Test Report: Docker_Linux_crio 21683

Commit: 1b58c48826b6fb4d6f7297e87780eae465bc5f37 | 2025-10-19 | 41984

Failed tests (41/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.53
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 147.01
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 44.88
42 TestAddons/parallel/Headlamp 2.67
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 8.12
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.24
47 TestAddons/parallel/AmdGpuDevicePlugin 5.26
91 TestFunctional/parallel/DashboardCmd 302.24
98 TestFunctional/parallel/ServiceCmdConnect 602.93
100 TestFunctional/parallel/PersistentVolumeClaim 377.99
104 TestFunctional/parallel/MySQL 602.76
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
154 TestFunctional/parallel/ServiceCmd/Format 0.54
155 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.2
197 TestJSONOutput/unpause/Command 1.63
248 TestPreload 427.89
280 TestPause/serial/Pause 6.5
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.76
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.09
312 TestStartStop/group/old-k8s-version/serial/Pause 5.94
318 TestStartStop/group/no-preload/serial/Pause 6.43
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.52
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.23
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.98
339 TestStartStop/group/newest-cni/serial/Pause 6.94
345 TestStartStop/group/embed-certs/serial/Pause 6.89
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.09
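
Most of the addon failures in this table share the root cause shown in the first log excerpt below: minikube's pre-disable pause check shells out to runc list, which fails under CRI-O. To reproduce a single failure locally from a minikube checkout, something like the following should work (a hedged sketch: the make integration target and TEST_ARGS variable follow minikube's usual integration harness, and exact flag spellings may differ by version):

	# Hypothetical local re-run of one failing test from this matrix,
	# matching the report's Docker driver + CRI-O configuration.
	make integration -e TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=crio' -test.run TestAddons/serial/Volcano -test.v -test.timeout=30m"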
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable volcano --alsologtostderr -v=1: exit status 11 (248.296497ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:22:48.572724   17040 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:22:48.572904   17040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:48.572914   17040 out.go:374] Setting ErrFile to fd 2...
	I1019 16:22:48.572918   17040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:48.573150   17040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:22:48.573407   17040 mustload.go:66] Loading cluster: addons-557770
	I1019 16:22:48.573746   17040 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:48.573774   17040 addons.go:607] checking whether the cluster is paused
	I1019 16:22:48.573854   17040 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:48.573867   17040 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:22:48.574306   17040 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:22:48.594316   17040 ssh_runner.go:195] Run: systemctl --version
	I1019 16:22:48.594370   17040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:22:48.613780   17040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:22:48.710800   17040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:22:48.710882   17040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:22:48.740643   17040 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:22:48.740685   17040 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:22:48.740691   17040 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:22:48.740695   17040 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:22:48.740698   17040 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:22:48.740705   17040 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:22:48.740709   17040 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:22:48.740711   17040 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:22:48.740714   17040 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:22:48.740733   17040 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:22:48.740739   17040 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:22:48.740742   17040 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:22:48.740745   17040 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:22:48.740747   17040 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:22:48.740750   17040 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:22:48.740761   17040 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:22:48.740773   17040 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:22:48.740777   17040 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:22:48.740779   17040 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:22:48.740785   17040 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:22:48.740791   17040 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:22:48.740794   17040 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:22:48.740797   17040 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:22:48.740799   17040 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:22:48.740801   17040 cri.go:89] found id: ""
	I1019 16:22:48.740865   17040 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:22:48.755631   17040 out.go:203] 
	W1019 16:22:48.756815   17040 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:22:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:22:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:22:48.756847   17040 out.go:285] * 
	* 
	W1019 16:22:48.760459   17040 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:22:48.761679   17040 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
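
The trace above pinpoints the failure: crictl ps succeeds against CRI-O, but the follow-up check, sudo runc list -f json, exits 1 because /run/runc does not exist on a CRI-O node (CRI-O manages its own runtime state rather than runc's). Both commands can be replayed against the node container to confirm (a sketch using docker exec in place of minikube's internal SSH runner; the profile name is taken from this run and assumes the container is still up):

	# Replay the pause-state check from the trace above on the kic node.
	docker exec addons-557770 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # CRI-O answers with container IDs
	docker exec addons-557770 sudo runc list -f json   # fails: open /run/runc: no such file or directory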

TestAddons/parallel/Registry (13.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.647887ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002801934s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.032636126s
addons_test.go:392: (dbg) Run:  kubectl --context addons-557770 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-557770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-557770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.04583555s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable registry --alsologtostderr -v=1: exit status 11 (233.671965ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:09.935687   19716 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:09.935988   19716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:09.935999   19716 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:09.936004   19716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:09.936261   19716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:09.936594   19716 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:09.936945   19716 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:09.936970   19716 addons.go:607] checking whether the cluster is paused
	I1019 16:23:09.937079   19716 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:09.937096   19716 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:09.937485   19716 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:09.955061   19716 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:09.955129   19716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:09.974291   19716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:10.070210   19716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:10.070304   19716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:10.100356   19716 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:10.100378   19716 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:10.100382   19716 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:10.100386   19716 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:10.100391   19716 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:10.100395   19716 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:10.100399   19716 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:10.100402   19716 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:10.100406   19716 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:10.100414   19716 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:10.100418   19716 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:10.100422   19716 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:10.100426   19716 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:10.100430   19716 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:10.100437   19716 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:10.100447   19716 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:10.100454   19716 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:10.100458   19716 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:10.100460   19716 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:10.100462   19716 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:10.100465   19716 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:10.100467   19716 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:10.100470   19716 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:10.100474   19716 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:10.100481   19716 cri.go:89] found id: ""
	I1019 16:23:10.100613   19716 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:10.115801   19716 out.go:203] 
	W1019 16:23:10.117112   19716 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:10.117134   19716 out.go:285] * 
	* 
	W1019 16:23:10.120232   19716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:10.121443   19716 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.53s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.381354ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-557770
addons_test.go:332: (dbg) Run:  kubectl --context addons-557770 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (236.100214ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:10.328400   19815 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:10.328732   19815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:10.328744   19815 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:10.328747   19815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:10.328917   19815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:10.329181   19815 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:10.329493   19815 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:10.329508   19815 addons.go:607] checking whether the cluster is paused
	I1019 16:23:10.329600   19815 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:10.329614   19815 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:10.329964   19815 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:10.349205   19815 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:10.349269   19815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:10.368361   19815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:10.466381   19815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:10.466494   19815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:10.496706   19815 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:10.496726   19815 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:10.496729   19815 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:10.496733   19815 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:10.496736   19815 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:10.496739   19815 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:10.496741   19815 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:10.496744   19815 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:10.496746   19815 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:10.496752   19815 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:10.496756   19815 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:10.496760   19815 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:10.496764   19815 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:10.496768   19815 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:10.496773   19815 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:10.496786   19815 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:10.496794   19815 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:10.496800   19815 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:10.496802   19815 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:10.496804   19815 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:10.496807   19815 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:10.496810   19815 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:10.496812   19815 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:10.496814   19815 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:10.496816   19815 cri.go:89] found id: ""
	I1019 16:23:10.496855   19815 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:10.511112   19815 out.go:203] 
	W1019 16:23:10.512599   19815 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:10.512619   19815 out.go:285] * 
	* 
	W1019 16:23:10.515640   19815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:10.517092   19815 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (147.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-557770 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-557770 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-557770 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [dbde3d7d-0716-40fc-9aa6-31e3eefed2af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [dbde3d7d-0716-40fc-9aa6-31e3eefed2af] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003591157s
I1019 16:23:16.575555    7228 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.315383533s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
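The exit status 28 propagated through ssh here is curl's own code for CURLE_OPERATION_TIMEDOUT, meaning the ingress endpoint never answered rather than returning an error page. The probe can be repeated by hand with an explicit deadline (profile and Host header taken from the log; the --max-time value is an arbitrary choice):

	# Re-issue the ingress probe with a short explicit timeout.
	minikube -p addons-557770 ssh -- curl -sS -o /dev/null -w '%{http_code}\n' --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/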
addons_test.go:288: (dbg) Run:  kubectl --context addons-557770 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-557770
helpers_test.go:243: (dbg) docker inspect addons-557770:

-- stdout --
	[
	    {
	        "Id": "e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f",
	        "Created": "2025-10-19T16:21:01.416576155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:21:01.449249778Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/hosts",
	        "LogPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f-json.log",
	        "Name": "/addons-557770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-557770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-557770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f",
	                "LowerDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-557770",
	                "Source": "/var/lib/docker/volumes/addons-557770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-557770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-557770",
	                "name.minikube.sigs.k8s.io": "addons-557770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb12be6277a9b3b2d91b7c1033229f388039b0b6b3aefe597e1caaadd677c015",
	            "SandboxKey": "/var/run/docker/netns/cb12be6277a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-557770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:7a:83:c2:9e:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa72c0f0c5a3e65694960e1b32d75351c671796cc32b6ceb00202dcb25d58472",
	                    "EndpointID": "34f039e463948afb9436de9842a2de7b6c75b6370561f32f3a140ab1933e5b10",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-557770",
	                        "e9d7c66cdc0d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-557770 -n addons-557770
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-557770 logs -n 25: (1.214953973s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-444864 --alsologtostderr --binary-mirror http://127.0.0.1:37593 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-444864 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p binary-mirror-444864                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-444864 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ addons  │ disable dashboard -p addons-557770                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-557770                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ start   │ -p addons-557770 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:22 UTC │
	│ addons  │ addons-557770 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ addons-557770 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-557770 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ addons-557770 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ addons-557770 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ ssh     │ addons-557770 ssh cat /opt/local-path-provisioner/pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │ 19 Oct 25 16:23 UTC │
	│ addons  │ addons-557770 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ ip      │ addons-557770 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │ 19 Oct 25 16:23 UTC │
	│ addons  │ addons-557770 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-557770                                                                                                                                                                                                                                                                                                                                                                                           │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │ 19 Oct 25 16:23 UTC │
	│ addons  │ addons-557770 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ ssh     │ addons-557770 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-557770 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ ip      │ addons-557770 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-557770        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:37.001527    8599 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:37.001638    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:37.001643    8599 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:37.001647    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:37.001854    8599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:20:37.002393    8599 out.go:368] Setting JSON to false
	I1019 16:20:37.003191    8599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":183,"bootTime":1760890654,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:37.003281    8599 start.go:143] virtualization: kvm guest
	I1019 16:20:37.005248    8599 out.go:179] * [addons-557770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:20:37.006659    8599 notify.go:221] Checking for updates...
	I1019 16:20:37.006723    8599 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:20:37.008491    8599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:37.010236    8599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:20:37.011864    8599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:20:37.013200    8599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:20:37.014635    8599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:20:37.016001    8599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:37.040252    8599 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:37.040410    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:37.101098    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:37.089254389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:37.101199    8599 docker.go:319] overlay module found
	I1019 16:20:37.102914    8599 out.go:179] * Using the docker driver based on user configuration
	I1019 16:20:37.104249    8599 start.go:309] selected driver: docker
	I1019 16:20:37.104264    8599 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:37.104276    8599 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:20:37.104878    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:37.161990    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:37.151615083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:37.162190    8599 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:37.162402    8599 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:20:37.164311    8599 out.go:179] * Using Docker driver with root privileges
	I1019 16:20:37.165525    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:20:37.165585    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:37.165595    8599 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:37.165690    8599 start.go:353] cluster config:
	{Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:37.167138    8599 out.go:179] * Starting "addons-557770" primary control-plane node in "addons-557770" cluster
	I1019 16:20:37.168420    8599 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:37.169584    8599 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:37.170735    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:37.170776    8599 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 16:20:37.170783    8599 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:37.170858    8599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:37.170928    8599 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 16:20:37.170941    8599 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:20:37.171275    8599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json ...
	I1019 16:20:37.171300    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json: {Name:mk0b880f81c44948ba924d3b86e3229bc276fcc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:20:37.187758    8599 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:37.187924    8599 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:37.187944    8599 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:37.187949    8599 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:37.187956    8599 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:37.187964    8599 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 16:20:49.559891    8599 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 16:20:49.559937    8599 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:20:49.560008    8599 start.go:360] acquireMachinesLock for addons-557770: {Name:mkd8c0d521d8e4e2b3309f4cceb29802c8ff5ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:20:49.560147    8599 start.go:364] duration metric: took 118.204µs to acquireMachinesLock for "addons-557770"
	I1019 16:20:49.560181    8599 start.go:93] Provisioning new machine with config: &{Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:20:49.560268    8599 start.go:125] createHost starting for "" (driver="docker")
	I1019 16:20:49.562195    8599 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 16:20:49.562417    8599 start.go:159] libmachine.API.Create for "addons-557770" (driver="docker")
	I1019 16:20:49.562451    8599 client.go:171] LocalClient.Create starting
	I1019 16:20:49.562566    8599 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 16:20:49.664253    8599 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 16:20:49.844256    8599 cli_runner.go:164] Run: docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 16:20:49.861774    8599 cli_runner.go:211] docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 16:20:49.861858    8599 network_create.go:284] running [docker network inspect addons-557770] to gather additional debugging logs...
	I1019 16:20:49.861883    8599 cli_runner.go:164] Run: docker network inspect addons-557770
	W1019 16:20:49.879492    8599 cli_runner.go:211] docker network inspect addons-557770 returned with exit code 1
	I1019 16:20:49.879519    8599 network_create.go:287] error running [docker network inspect addons-557770]: docker network inspect addons-557770: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-557770 not found
	I1019 16:20:49.879530    8599 network_create.go:289] output of [docker network inspect addons-557770]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-557770 not found
	
	** /stderr **
	I1019 16:20:49.879612    8599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:20:49.897397    8599 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d9fd90}
	I1019 16:20:49.897430    8599 network_create.go:124] attempt to create docker network addons-557770 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 16:20:49.897470    8599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-557770 addons-557770
	I1019 16:20:49.956189    8599 network_create.go:108] docker network addons-557770 192.168.49.0/24 created
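	For reference, the subnet and gateway chosen above can be confirmed against the live network with a one-liner like the following (a sketch; the network name and expected values are taken from the preceding log lines):
	    docker network inspect addons-557770 -f '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected output: 192.168.49.0/24 192.168.49.1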
	I1019 16:20:49.956236    8599 kic.go:121] calculated static IP "192.168.49.2" for the "addons-557770" container
	I1019 16:20:49.956300    8599 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 16:20:49.973143    8599 cli_runner.go:164] Run: docker volume create addons-557770 --label name.minikube.sigs.k8s.io=addons-557770 --label created_by.minikube.sigs.k8s.io=true
	I1019 16:20:49.992160    8599 oci.go:103] Successfully created a docker volume addons-557770
	I1019 16:20:49.992248    8599 cli_runner.go:164] Run: docker run --rm --name addons-557770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --entrypoint /usr/bin/test -v addons-557770:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 16:20:56.849519    8599 cli_runner.go:217] Completed: docker run --rm --name addons-557770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --entrypoint /usr/bin/test -v addons-557770:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.857232984s)
	I1019 16:20:56.849551    8599 oci.go:107] Successfully prepared a docker volume addons-557770
	I1019 16:20:56.849601    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:56.849630    8599 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 16:20:56.849688    8599 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-557770:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 16:21:01.341030    8599 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-557770:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.491302405s)
	I1019 16:21:01.341060    8599 kic.go:203] duration metric: took 4.491428098s to extract preloaded images to volume ...
	W1019 16:21:01.341184    8599 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 16:21:01.341228    8599 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 16:21:01.341285    8599 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 16:21:01.400081    8599 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-557770 --name addons-557770 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-557770 --network addons-557770 --ip 192.168.49.2 --volume addons-557770:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 16:21:01.697891    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Running}}
	I1019 16:21:01.719050    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:01.738178    8599 cli_runner.go:164] Run: docker exec addons-557770 stat /var/lib/dpkg/alternatives/iptables
	I1019 16:21:01.792492    8599 oci.go:144] the created container "addons-557770" has a running status.
	I1019 16:21:01.792526    8599 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa...
	I1019 16:21:01.997323    8599 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 16:21:02.033794    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:02.057920    8599 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 16:21:02.057947    8599 kic_runner.go:114] Args: [docker exec --privileged addons-557770 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 16:21:02.114224    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:02.134225    8599 machine.go:94] provisionDockerMachine start ...
	I1019 16:21:02.134328    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.153843    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.154133    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.154152    8599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:21:02.289964    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-557770
	
	I1019 16:21:02.289999    8599 ubuntu.go:182] provisioning hostname "addons-557770"
	I1019 16:21:02.290086    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.308125    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.308371    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.308386    8599 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-557770 && echo "addons-557770" | sudo tee /etc/hostname
	I1019 16:21:02.453445    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-557770
	
	I1019 16:21:02.453519    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.471453    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.471701    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.471728    8599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-557770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-557770/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-557770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:21:02.604965    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:21:02.605010    8599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 16:21:02.605044    8599 ubuntu.go:190] setting up certificates
	I1019 16:21:02.605057    8599 provision.go:84] configureAuth start
	I1019 16:21:02.605124    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:02.623212    8599 provision.go:143] copyHostCerts
	I1019 16:21:02.623301    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 16:21:02.623432    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 16:21:02.623515    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 16:21:02.623578    8599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.addons-557770 san=[127.0.0.1 192.168.49.2 addons-557770 localhost minikube]
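	The SAN list requested here can be double-checked once the server certificate is on disk (a sketch; the -ext option assumes OpenSSL 1.1.1 or newer, and the path is the one named in the log line above):
	    openssl x509 -in /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem -noout -ext subjectAltName
	    # should enumerate 127.0.0.1, 192.168.49.2, addons-557770, localhost and minikube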
	I1019 16:21:03.287724    8599 provision.go:177] copyRemoteCerts
	I1019 16:21:03.287784    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:21:03.287817    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.306256    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.402262    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 16:21:03.421832    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 16:21:03.439279    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:21:03.456563    8599 provision.go:87] duration metric: took 851.495239ms to configureAuth
	I1019 16:21:03.456591    8599 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:21:03.456795    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:03.456897    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.474939    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:03.475169    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:03.475189    8599 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:21:03.723061    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:21:03.723101    8599 machine.go:97] duration metric: took 1.58885289s to provisionDockerMachine
	I1019 16:21:03.723114    8599 client.go:174] duration metric: took 14.160654765s to LocalClient.Create
	I1019 16:21:03.723133    8599 start.go:167] duration metric: took 14.160717768s to libmachine.API.Create "addons-557770"
	I1019 16:21:03.723153    8599 start.go:293] postStartSetup for "addons-557770" (driver="docker")
	I1019 16:21:03.723164    8599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:21:03.723222    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:21:03.723258    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.741775    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.841359    8599 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:21:03.845057    8599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:21:03.845100    8599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:21:03.845113    8599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 16:21:03.845177    8599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 16:21:03.845203    8599 start.go:296] duration metric: took 122.044139ms for postStartSetup
	I1019 16:21:03.845531    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:03.863610    8599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json ...
	I1019 16:21:03.863926    8599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:21:03.863978    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.881731    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.975202    8599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:21:03.979826    8599 start.go:128] duration metric: took 14.419541469s to createHost
	I1019 16:21:03.979856    8599 start.go:83] releasing machines lock for "addons-557770", held for 14.419693478s
	I1019 16:21:03.979929    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:03.997689    8599 ssh_runner.go:195] Run: cat /version.json
	I1019 16:21:03.997737    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.997782    8599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:21:03.997849    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:04.017608    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:04.017960    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:04.169611    8599 ssh_runner.go:195] Run: systemctl --version
	I1019 16:21:04.176363    8599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:21:04.211541    8599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:21:04.216378    8599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:21:04.216447    8599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:21:04.243460    8599 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 16:21:04.243488    8599 start.go:496] detecting cgroup driver to use...
	I1019 16:21:04.243525    8599 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 16:21:04.243579    8599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:21:04.259311    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:21:04.272223    8599 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:21:04.272282    8599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:21:04.288861    8599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:21:04.306562    8599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:21:04.389940    8599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:21:04.476397    8599 docker.go:234] disabling docker service ...
	I1019 16:21:04.476473    8599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:21:04.494901    8599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:21:04.508316    8599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:21:04.596295    8599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:21:04.677663    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:21:04.690649    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:21:04.705705    8599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:21:04.705772    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.716248    8599 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 16:21:04.716318    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.725596    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.734626    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.743880    8599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:21:04.752558    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.762018    8599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.775951    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
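	The cumulative effect of the sed edits above is easiest to see by grepping the drop-in afterwards (a sketch to run inside the node, e.g. via minikube ssh or docker exec):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expect pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
	    # conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls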
	I1019 16:21:04.784950    8599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:21:04.792686    8599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 16:21:04.792738    8599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 16:21:04.805905    8599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
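	Whether the modprobe and the echo above took effect can be checked in one step (a sketch; sysctl accepts multiple keys on one invocation):
	    lsmod | grep br_netfilter && sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	    # both sysctls should report 1 once br_netfilter is loaded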
	I1019 16:21:04.814179    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:04.888360    8599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 16:21:04.989451    8599 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:21:04.989558    8599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:21:04.993651    8599 start.go:564] Will wait 60s for crictl version
	I1019 16:21:04.993722    8599 ssh_runner.go:195] Run: which crictl
	I1019 16:21:04.997411    8599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:21:05.021517    8599 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
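	The same endpoint the version probe used can be exercised directly (a sketch; the crictl path comes from the `which crictl` step above):
	    sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # lists all CRI-O containers; empty at this stage, before kubeadm has started anything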
	I1019 16:21:05.021636    8599 ssh_runner.go:195] Run: crio --version
	I1019 16:21:05.049171    8599 ssh_runner.go:195] Run: crio --version
	I1019 16:21:05.078433    8599 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 16:21:05.079880    8599 cli_runner.go:164] Run: docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:05.097340    8599 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 16:21:05.101392    8599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:05.112285    8599 kubeadm.go:884] updating cluster {Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:21:05.112418    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:05.112468    8599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:05.144315    8599 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:05.144339    8599 crio.go:433] Images already preloaded, skipping extraction
	I1019 16:21:05.144411    8599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:05.170024    8599 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:05.170047    8599 cache_images.go:86] Images are preloaded, skipping loading
	I1019 16:21:05.170055    8599 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 16:21:05.170162    8599 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-557770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
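	Because the ExecStart override above is installed as a systemd drop-in, the merged unit can be inspected inside the node (a sketch; the two file paths are the ones the log copies to a few lines below):
	    systemctl cat kubelet
	    # concatenates /lib/systemd/system/kubelet.service with /etc/systemd/system/kubelet.service.d/10-kubeadm.conf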
	I1019 16:21:05.170239    8599 ssh_runner.go:195] Run: crio config
	I1019 16:21:05.215127    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:21:05.215151    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:05.215167    8599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:21:05.215187    8599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-557770 NodeName:addons-557770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:21:05.215323    8599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-557770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
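	A config in this shape can be sanity-checked before init ever runs (a sketch; `kubeadm config validate` is available in recent kubeadm releases, and the binary and yaml paths match those in the surrounding log):
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new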
	I1019 16:21:05.215378    8599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:21:05.223279    8599 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:21:05.223344    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:21:05.231525    8599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 16:21:05.244464    8599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:21:05.259865    8599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1019 16:21:05.272922    8599 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:21:05.276762    8599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:05.287045    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:05.362385    8599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:05.388668    8599 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770 for IP: 192.168.49.2
	I1019 16:21:05.388697    8599 certs.go:195] generating shared ca certs ...
	I1019 16:21:05.388719    8599 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.388856    8599 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 16:21:05.763533    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt ...
	I1019 16:21:05.763564    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt: {Name:mk44f8e3a76dd83cca35327978860564665e7c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.763742    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key ...
	I1019 16:21:05.763759    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key: {Name:mk431f409d1be8f924b8d1e3de8f01ef81484ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.763837    8599 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 16:21:06.038748    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt ...
	I1019 16:21:06.038784    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt: {Name:mk366c71806b79180d7079a88d65e6419023392d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.038955    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key ...
	I1019 16:21:06.038967    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key: {Name:mk855f6d3642997c9f92dc72ec5c319a8fccbf7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.039040    8599 certs.go:257] generating profile certs ...
	I1019 16:21:06.039118    8599 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key
	I1019 16:21:06.039139    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt with IP's: []
	I1019 16:21:06.247192    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt ...
	I1019 16:21:06.247222    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: {Name:mk438331dfa0d6b49c8f56c3992fd1b0c789d59a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.247394    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key ...
	I1019 16:21:06.247406    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key: {Name:mkc6ed2572f7106eb844bc591483dde318b77cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.247485    8599 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08
	I1019 16:21:06.247503    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 16:21:06.452505    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 ...
	I1019 16:21:06.452537    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08: {Name:mk61b8e1a223d3350c0d71f06d27dd73bbc319e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.452713    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08 ...
	I1019 16:21:06.452725    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08: {Name:mkbbb235b94daaa2d21108ad873fa041c1e4d991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.452806    8599 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt
	I1019 16:21:06.452893    8599 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key
	I1019 16:21:06.452940    8599 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key
	I1019 16:21:06.452958    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt with IP's: []
	I1019 16:21:06.899042    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt ...
	I1019 16:21:06.899081    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt: {Name:mk68d1d4e27c342b829886fdb40b43beef811c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.899247    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key ...
	I1019 16:21:06.899258    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key: {Name:mke44c82e24a4c54aecb289324ed9b282d52ebad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.899454    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 16:21:06.899489    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:21:06.899512    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:21:06.899540    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 16:21:06.900130    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:21:06.918435    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:21:06.936440    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:21:06.954843    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 16:21:06.973869    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 16:21:06.991291    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 16:21:07.009288    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:21:07.027274    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 16:21:07.045284    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:21:07.064711    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:21:07.077962    8599 ssh_runner.go:195] Run: openssl version
	I1019 16:21:07.084166    8599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:21:07.097092    8599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.101047    8599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.101118    8599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.135518    8599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
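Note: the two steps above wire minikubeCA into the system trust store. The openssl x509 -hash call prints the CA's subject hash (b5213941 here), and OpenSSL resolves CAs in /etc/ssl/certs via <hash>.0 symlinks, so the ln -fs makes any TLS client using the default verify paths trust the cluster CA. Reproducing the hash by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941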
	I1019 16:21:07.144765    8599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:21:07.148436    8599 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 16:21:07.148482    8599 kubeadm.go:401] StartCluster: {Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:07.148556    8599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:21:07.148599    8599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:21:07.174719    8599 cri.go:89] found id: ""
	I1019 16:21:07.174782    8599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:21:07.182820    8599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:21:07.190899    8599 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 16:21:07.190969    8599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:21:07.199000    8599 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 16:21:07.199018    8599 kubeadm.go:158] found existing configuration files:
	
	I1019 16:21:07.199091    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 16:21:07.206784    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 16:21:07.206838    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 16:21:07.214703    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 16:21:07.222641    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 16:21:07.222727    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:21:07.230327    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 16:21:07.237984    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 16:21:07.238044    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:21:07.245796    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 16:21:07.253554    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 16:21:07.253605    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 16:21:07.260985    8599 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
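Note: the long --ignore-preflight-errors list above is expected for the docker driver: minikube pre-stages the manifests and etcd directory itself (hence the DirAvailable/FileAvailable entries), and swap, memory, and kernel-config checks do not apply inside the kic container. To see what kubeadm would otherwise flag, only the preflight phase can be run; a sketch using the config path from this run:

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml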
	I1019 16:21:07.296707    8599 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 16:21:07.296783    8599 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 16:21:07.318802    8599 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 16:21:07.318895    8599 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 16:21:07.318944    8599 kubeadm.go:319] OS: Linux
	I1019 16:21:07.319015    8599 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 16:21:07.319057    8599 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 16:21:07.319140    8599 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 16:21:07.319186    8599 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 16:21:07.319225    8599 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 16:21:07.319263    8599 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 16:21:07.319363    8599 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 16:21:07.319429    8599 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 16:21:07.376452    8599 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 16:21:07.376590    8599 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 16:21:07.376746    8599 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 16:21:07.386380    8599 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 16:21:07.388411    8599 out.go:252]   - Generating certificates and keys ...
	I1019 16:21:07.388535    8599 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 16:21:07.388620    8599 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 16:21:07.507137    8599 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 16:21:08.012185    8599 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 16:21:08.221709    8599 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 16:21:08.319217    8599 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 16:21:08.509120    8599 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 16:21:08.509293    8599 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-557770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:08.828827    8599 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 16:21:08.829029    8599 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-557770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:09.159860    8599 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 16:21:09.572535    8599 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 16:21:09.965134    8599 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 16:21:09.965258    8599 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 16:21:10.073234    8599 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 16:21:10.288320    8599 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 16:21:10.436040    8599 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 16:21:10.715794    8599 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 16:21:10.818023    8599 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 16:21:10.818501    8599 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 16:21:10.823565    8599 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 16:21:10.825188    8599 out.go:252]   - Booting up control plane ...
	I1019 16:21:10.825299    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 16:21:10.825388    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 16:21:10.826113    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 16:21:10.839701    8599 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 16:21:10.839885    8599 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 16:21:10.846350    8599 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 16:21:10.846474    8599 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 16:21:10.846519    8599 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 16:21:10.944926    8599 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 16:21:10.945113    8599 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 16:21:11.446580    8599 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.871912ms
	I1019 16:21:11.449234    8599 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 16:21:11.449391    8599 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 16:21:11.449519    8599 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 16:21:11.449622    8599 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 16:21:12.949061    8599 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.499746023s
	I1019 16:21:13.683804    8599 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.234288977s
	I1019 16:21:15.450507    8599 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001247285s
	I1019 16:21:15.462026    8599 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 16:21:15.472453    8599 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 16:21:15.481577    8599 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 16:21:15.481837    8599 kubeadm.go:319] [mark-control-plane] Marking the node addons-557770 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 16:21:15.489415    8599 kubeadm.go:319] [bootstrap-token] Using token: 5153m7.ghqmp7zdo9wx0usq
	I1019 16:21:15.490639    8599 out.go:252]   - Configuring RBAC rules ...
	I1019 16:21:15.490779    8599 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 16:21:15.494010    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 16:21:15.499759    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 16:21:15.503047    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 16:21:15.506182    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 16:21:15.509626    8599 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 16:21:15.855970    8599 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 16:21:16.273537    8599 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 16:21:16.856153    8599 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 16:21:16.857106    8599 kubeadm.go:319] 
	I1019 16:21:16.857180    8599 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 16:21:16.857189    8599 kubeadm.go:319] 
	I1019 16:21:16.857253    8599 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 16:21:16.857279    8599 kubeadm.go:319] 
	I1019 16:21:16.857327    8599 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 16:21:16.857407    8599 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 16:21:16.857504    8599 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 16:21:16.857522    8599 kubeadm.go:319] 
	I1019 16:21:16.857599    8599 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 16:21:16.857608    8599 kubeadm.go:319] 
	I1019 16:21:16.857678    8599 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 16:21:16.857688    8599 kubeadm.go:319] 
	I1019 16:21:16.857782    8599 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 16:21:16.857891    8599 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 16:21:16.857978    8599 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 16:21:16.857988    8599 kubeadm.go:319] 
	I1019 16:21:16.858125    8599 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 16:21:16.858220    8599 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 16:21:16.858229    8599 kubeadm.go:319] 
	I1019 16:21:16.858323    8599 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5153m7.ghqmp7zdo9wx0usq \
	I1019 16:21:16.858476    8599 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 16:21:16.858505    8599 kubeadm.go:319] 	--control-plane 
	I1019 16:21:16.858515    8599 kubeadm.go:319] 
	I1019 16:21:16.858634    8599 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 16:21:16.858644    8599 kubeadm.go:319] 
	I1019 16:21:16.858784    8599 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5153m7.ghqmp7zdo9wx0usq \
	I1019 16:21:16.858965    8599 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 16:21:16.860518    8599 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 16:21:16.860646    8599 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
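Note: the --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the DER-encoded public key of the cluster CA. It can be recomputed from the CA certificate (here under minikube's certificateDir, /var/lib/minikube/certs, per the [certs] line earlier) with the stock openssl pipeline from the Kubernetes docs:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'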
	I1019 16:21:16.860691    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:21:16.860705    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:16.863213    8599 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 16:21:16.864381    8599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 16:21:16.868639    8599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 16:21:16.868655    8599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 16:21:16.882258    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 16:21:17.083995    8599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:21:17.084135    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:17.084180    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-557770 minikube.k8s.io/updated_at=2025_10_19T16_21_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=addons-557770 minikube.k8s.io/primary=true
	I1019 16:21:17.093579    8599 ops.go:34] apiserver oom_adj: -16
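Note: oom_adj ranges from -17 (never kill) to +15, and negative values make the kernel OOM killer less likely to target the process; the -16 read back above confirms the apiserver is strongly protected, which is what the /proc read at 16:21:17.083 is verifying. The same check by hand:

    minikube -p addons-557770 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'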
	I1019 16:21:17.170414    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:17.671280    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:18.170534    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:18.670771    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:19.171262    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:19.671244    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:20.171375    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:20.670845    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.170704    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.671323    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.737075    8599 kubeadm.go:1114] duration metric: took 4.652977892s to wait for elevateKubeSystemPrivileges
	I1019 16:21:21.737117    8599 kubeadm.go:403] duration metric: took 14.588636179s to StartCluster
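Note: the burst of "kubectl get sa default" calls above (one every ~500ms from 16:21:17.170 to 16:21:21.671) is minikube polling for the default ServiceAccount before creating the minikube-rbac cluster-admin binding, since ServiceAccount creation is asynchronous after control-plane start. An equivalent hand-rolled wait, as a sketch:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done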
	I1019 16:21:21.737142    8599 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:21.737266    8599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:21:21.737725    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:21.738906    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 16:21:21.738923    8599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:21.739015    8599 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
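Note: the toEnable map above is the per-profile addon selection for this run (the default set plus the test-required addons such as registry, metrics-server, and csi-hostpath-driver). The same state is visible and adjustable from the CLI, for example:

    minikube -p addons-557770 addons list
    minikube -p addons-557770 addons enable metrics-server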
	I1019 16:21:21.739135    8599 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-557770"
	I1019 16:21:21.739157    8599 addons.go:70] Setting yakd=true in profile "addons-557770"
	I1019 16:21:21.739189    8599 addons.go:239] Setting addon yakd=true in "addons-557770"
	I1019 16:21:21.739198    8599 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-557770"
	I1019 16:21:21.739188    8599 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-557770"
	I1019 16:21:21.739218    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739223    8599 addons.go:70] Setting cloud-spanner=true in profile "addons-557770"
	I1019 16:21:21.739222    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:21.739228    8599 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-557770"
	I1019 16:21:21.739234    8599 addons.go:239] Setting addon cloud-spanner=true in "addons-557770"
	I1019 16:21:21.739249    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739225    8599 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-557770"
	I1019 16:21:21.739252    8599 addons.go:70] Setting storage-provisioner=true in profile "addons-557770"
	I1019 16:21:21.739291    8599 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-557770"
	I1019 16:21:21.739295    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739303    8599 addons.go:239] Setting addon storage-provisioner=true in "addons-557770"
	I1019 16:21:21.739343    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739218    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739350    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739391    8599 addons.go:70] Setting registry=true in profile "addons-557770"
	I1019 16:21:21.739409    8599 addons.go:239] Setting addon registry=true in "addons-557770"
	I1019 16:21:21.739421    8599 addons.go:70] Setting ingress=true in profile "addons-557770"
	I1019 16:21:21.739938    8599 addons.go:70] Setting volcano=true in profile "addons-557770"
	I1019 16:21:21.739955    8599 addons.go:239] Setting addon ingress=true in "addons-557770"
	I1019 16:21:21.739951    8599 addons.go:70] Setting default-storageclass=true in profile "addons-557770"
	I1019 16:21:21.739974    8599 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-557770"
	I1019 16:21:21.739978    8599 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-557770"
	I1019 16:21:21.740512    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740559    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740618    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740764    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739964    8599 addons.go:239] Setting addon volcano=true in "addons-557770"
	I1019 16:21:21.740847    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.741151    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.741401    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.743047    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739847    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.744192    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739914    8599 addons.go:70] Setting inspektor-gadget=true in profile "addons-557770"
	I1019 16:21:21.745037    8599 addons.go:239] Setting addon inspektor-gadget=true in "addons-557770"
	I1019 16:21:21.745095    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.745653    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739921    8599 addons.go:70] Setting metrics-server=true in profile "addons-557770"
	I1019 16:21:21.746912    8599 addons.go:239] Setting addon metrics-server=true in "addons-557770"
	I1019 16:21:21.746944    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.747116    8599 out.go:179] * Verifying Kubernetes components...
	I1019 16:21:21.747346    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739991    8599 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-557770"
	I1019 16:21:21.739780    8599 addons.go:70] Setting ingress-dns=true in profile "addons-557770"
	I1019 16:21:21.740011    8599 addons.go:70] Setting gcp-auth=true in profile "addons-557770"
	I1019 16:21:21.740011    8599 addons.go:70] Setting registry-creds=true in profile "addons-557770"
	I1019 16:21:21.740049    8599 addons.go:70] Setting volumesnapshots=true in profile "addons-557770"
	I1019 16:21:21.740129    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.748578    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.749437    8599 mustload.go:66] Loading cluster: addons-557770
	I1019 16:21:21.749617    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:21.749722    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:21.750048    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.750137    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.751946    8599 addons.go:239] Setting addon ingress-dns=true in "addons-557770"
	I1019 16:21:21.752029    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753243    8599 addons.go:239] Setting addon registry-creds=true in "addons-557770"
	I1019 16:21:21.753268    8599 addons.go:239] Setting addon volumesnapshots=true in "addons-557770"
	I1019 16:21:21.753285    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753314    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753975    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.768532    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.774113    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.774304    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.795770    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 16:21:21.797396    8599 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 16:21:21.799206    8599 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:21.799296    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 16:21:21.799392    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.801774    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 16:21:21.805322    8599 addons.go:239] Setting addon default-storageclass=true in "addons-557770"
	I1019 16:21:21.805372    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.805881    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.806058    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 16:21:21.807212    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 16:21:21.808273    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W1019 16:21:21.808318    8599 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 16:21:21.810418    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 16:21:21.811876    8599 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:21:21.812332    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 16:21:21.813620    8599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:21.813675    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:21:21.813811    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.816647    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 16:21:21.817961    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 16:21:21.818007    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 16:21:21.818360    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.834902    8599 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 16:21:21.843463    8599 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 16:21:21.843491    8599 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 16:21:21.843568    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.849613    8599 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 16:21:21.849832    8599 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 16:21:21.851171    8599 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:21.851193    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 16:21:21.851267    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.851691    8599 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:21.851873    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 16:21:21.851938    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.853773    8599 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-557770"
	I1019 16:21:21.853823    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.854313    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.854442    8599 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 16:21:21.855750    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 16:21:21.855771    8599 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 16:21:21.855837    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.861122    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.865062    8599 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 16:21:21.866423    8599 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 16:21:21.867508    8599 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 16:21:21.867529    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 16:21:21.867590    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.876235    8599 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 16:21:21.877934    8599 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:21.877964    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 16:21:21.878040    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.884105    8599 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 16:21:21.885295    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 16:21:21.885320    8599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 16:21:21.885384    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.896231    8599 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 16:21:21.901312    8599 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:21.901337    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 16:21:21.901402    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.908975    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.909442    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.912583    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 16:21:21.912828    8599 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:21.914665    8599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:21:21.914745    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.916578    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:21.917952    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.927691    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:21.929042    8599 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:21.929115    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 16:21:21.929186    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.929970    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
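Note: the sed pipeline above patches the live coredns ConfigMap in place: it inserts a hosts plugin block that resolves host.minikube.internal to the gateway IP 192.168.49.1, adds the log directive before errors, and writes the result back via "kubectl replace -f -". The patched Corefile can be checked with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'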
	I1019 16:21:21.931019    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 16:21:21.936265    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 16:21:21.936293    8599 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 16:21:21.936362    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.938700    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.938709    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.944725    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.947378    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.953516    8599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:21.956325    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.956997    8599 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 16:21:21.958324    8599 out.go:179]   - Using image docker.io/busybox:stable
	I1019 16:21:21.959502    8599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:21.959574    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 16:21:21.959677    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.961432    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.976654    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.977275    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	W1019 16:21:21.985010    8599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:21.985052    8599 retry.go:31] will retry after 269.680961ms: ssh: handshake failed: EOF
	I1019 16:21:21.985128    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.993541    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.993805    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.999726    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:22.089692    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:22.093735    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:22.113821    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 16:21:22.113846    8599 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 16:21:22.129497    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:22.129817    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:22.134542    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:22.137489    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:22.144615    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:22.152505    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 16:21:22.152545    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 16:21:22.163978    8599 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:22.164005    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 16:21:22.174929    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 16:21:22.174967    8599 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 16:21:22.177749    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 16:21:22.177850    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 16:21:22.178779    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:22.182794    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:22.193240    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 16:21:22.193263    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 16:21:22.223635    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 16:21:22.223667    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 16:21:22.224822    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:22.227208    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 16:21:22.227306    8599 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 16:21:22.233227    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 16:21:22.233277    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 16:21:22.260624    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 16:21:22.260652    8599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 16:21:22.301909    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 16:21:22.302111    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 16:21:22.302149    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 16:21:22.302201    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 16:21:22.330939    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:22.330963    8599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 16:21:22.344440    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:22.344464    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 16:21:22.369678    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 16:21:22.369775    8599 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 16:21:22.372840    8599 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
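Note: the line above records minikube injecting the host.minikube.internal record so pods can reach the host gateway (192.168.49.1 in this run). A minimal way to confirm the injection from a kubectl context pointed at this cluster; the exact hosts-block layout inside the generated Corefile is an assumption, not something shown in this log:

  kubectl -n kube-system get configmap coredns -o yaml
  # expect an entry mapping 192.168.49.1 to host.minikube.internal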
	I1019 16:21:22.374692    8599 node_ready.go:35] waiting up to 6m0s for node "addons-557770" to be "Ready" ...
	I1019 16:21:22.375171    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 16:21:22.375231    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 16:21:22.396500    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:22.411451    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:22.437855    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 16:21:22.437967    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 16:21:22.463694    8599 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:22.463722    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 16:21:22.489516    8599 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 16:21:22.489551    8599 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 16:21:22.514670    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:22.524560    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 16:21:22.524588    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 16:21:22.550477    8599 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:22.550501    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 16:21:22.580578    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 16:21:22.580620    8599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 16:21:22.615656    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:22.648366    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 16:21:22.648411    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 16:21:22.696862    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 16:21:22.696893    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 16:21:22.751350    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:22.751392    8599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 16:21:22.807702    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:22.877642    8599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-557770" context rescaled to 1 replicas
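Note: the coredns rescale above is done through the API (kapi.go:214), presumably via client-go rather than by shelling out; a roughly equivalent standalone command, as a sketch:

  kubectl -n kube-system scale deployment coredns --replicas=1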
	I1019 16:21:23.430793    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.247961964s)
	I1019 16:21:23.430931    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.206015493s)
	W1019 16:21:23.430954    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:23.430971    8599 retry.go:31] will retry after 334.626482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
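Note: every ig-crd.yaml failure in this run is the same client-side validation error: the manifest is missing its apiVersion and kind fields, so kubectl rejects that document before submitting it, while the other documents in the batch still go through (which is why the gadget resources show up as created, and later as unchanged). The problem should be reproducible without touching the cluster; a sketch, run on the node where the manifest lives:

  kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
  # or bypass validation entirely, as the error message itself suggests:
  # kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml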
	I1019 16:21:23.431046    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034437139s)
	I1019 16:21:23.431077    8599 addons.go:480] Verifying addon metrics-server=true in "addons-557770"
	I1019 16:21:23.431135    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.019576953s)
	I1019 16:21:23.431362    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.252554566s)
	I1019 16:21:23.431403    8599 addons.go:480] Verifying addon ingress=true in "addons-557770"
	I1019 16:21:23.433386    8599 out.go:179] * Verifying ingress addon...
	I1019 16:21:23.433405    8599 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-557770 service yakd-dashboard -n yakd-dashboard
	
	I1019 16:21:23.436521    8599 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 16:21:23.439331    8599 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 16:21:23.439354    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:23.766211    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:23.839635    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.324916614s)
	W1019 16:21:23.839688    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:21:23.839712    8599 retry.go:31] will retry after 177.211195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
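Note: unlike the ig-crd.yaml case, this failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass object and the CRD that defines it are applied in the same kubectl invocation, and the API server has not registered the new type by the time the class is submitted. The --force retry below completes cleanly once the CRDs are established. A sketch of the race-free ordering, using the file paths from the log:

  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
  kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml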
	I1019 16:21:23.839740    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.223998911s)
	I1019 16:21:23.839772    8599 addons.go:480] Verifying addon registry=true in "addons-557770"
	I1019 16:21:23.839907    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.032167186s)
	I1019 16:21:23.839940    8599 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-557770"
	I1019 16:21:23.841504    8599 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 16:21:23.841523    8599 out.go:179] * Verifying registry addon...
	I1019 16:21:23.844121    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 16:21:23.844178    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 16:21:23.847540    8599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:21:23.847583    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:23.848747    8599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:21:23.848769    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
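Note: the kapi.go:96 lines that dominate the rest of this log are a poll loop: minikube lists the pods matching each addon's label selector and re-checks until they leave Pending. An approximate standalone equivalent for one of the selectors (the 6m timeout mirrors the node-ready budget above and is an assumption, not minikube's internal mechanism):

  kubectl -n kube-system wait --for=condition=Ready pod \
    -l kubernetes.io/minikube-addons=registry --timeout=6m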
	I1019 16:21:23.948229    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:24.017324    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:24.347182    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:24.347241    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:24.364329    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:24.364365    8599 retry.go:31] will retry after 522.75767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 16:21:24.377945    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
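Note: node_ready.go is polling the node's Ready condition against the 6m0s budget declared earlier. A one-shot version of the same check, as a sketch:

  kubectl get node addons-557770 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'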
	I1019 16:21:24.439594    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:24.847824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:24.847873    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:24.887778    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:24.949212    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:25.347648    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:25.347776    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:25.439394    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:25.848036    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:25.848185    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:25.949360    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:26.347917    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:26.347964    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:26.449787    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:26.518856    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.501313649s)
	I1019 16:21:26.518938    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.631127797s)
	W1019 16:21:26.518976    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:26.519000    8599 retry.go:31] will retry after 569.326931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:26.847794    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:26.847841    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:26.877281    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:26.948802    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:27.089359    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:27.348571    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:27.348571    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:27.440390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:21:27.627530    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:27.627562    8599 retry.go:31] will retry after 747.557854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:27.847307    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:27.847423    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:27.948149    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:28.347636    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:28.347773    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:28.375847    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:28.440176    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:28.847352    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:28.847391    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:28.915645    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:28.915681    8599 retry.go:31] will retry after 1.278947633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:28.948481    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:29.347689    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:29.347730    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:29.378279    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:29.440288    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:29.475383    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 16:21:29.475448    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:29.494087    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:29.604770    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 16:21:29.618018    8599 addons.go:239] Setting addon gcp-auth=true in "addons-557770"
	I1019 16:21:29.618102    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:29.618457    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:29.636259    8599 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 16:21:29.636337    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:29.654446    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:29.749334    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:29.750803    8599 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 16:21:29.752354    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 16:21:29.752375    8599 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 16:21:29.766838    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 16:21:29.766861    8599 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 16:21:29.780082    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:29.780113    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 16:21:29.793552    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:29.847675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:29.847706    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:29.939720    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:30.113966    8599 addons.go:480] Verifying addon gcp-auth=true in "addons-557770"
	I1019 16:21:30.119176    8599 out.go:179] * Verifying gcp-auth addon...
	I1019 16:21:30.121484    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 16:21:30.127927    8599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 16:21:30.127955    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
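Note: gcp-auth verification uses the same poll loop, keyed on the label and namespace shown two lines up. A quick spot check from outside the test run, as a sketch:

  kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth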
	I1019 16:21:30.195085    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:30.347673    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:30.347706    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:30.440213    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:30.624819    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:30.730340    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:30.730370    8599 retry.go:31] will retry after 2.40768445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:30.847717    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:30.847893    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:30.940383    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:31.125201    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:31.347856    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:31.347870    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:31.440085    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:31.624701    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:31.846645    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:31.846806    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:31.877255    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:31.939924    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:32.124996    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:32.347694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:32.347795    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:32.440320    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:32.624879    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:32.847568    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:32.847596    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:32.939975    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:33.124766    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:33.138926    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:33.347196    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:33.347304    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:33.439634    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:33.624184    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:33.679127    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:33.679159    8599 retry.go:31] will retry after 1.514965587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:33.846780    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:33.846867    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:33.939802    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:34.124492    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:34.347536    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:34.347652    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:34.378092    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:34.439515    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:34.624993    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:34.847653    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:34.847675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:34.939749    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:35.124639    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:35.194827    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:35.347445    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:35.347554    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:35.440111    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:35.625476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:35.733170    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:35.733202    8599 retry.go:31] will retry after 5.197682713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:35.846780    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:35.846791    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:35.940299    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:36.125213    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:36.346799    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:36.346958    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:36.439447    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:36.625130    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:36.847017    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:36.847032    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:36.877569    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:36.940329    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:37.125890    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:37.347810    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:37.347835    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:37.439390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:37.625172    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:37.846741    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:37.846847    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:37.940538    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:38.124207    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:38.347186    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:38.347250    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:38.439308    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:38.624925    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:38.847682    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:38.847812    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:38.940089    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:39.124938    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:39.347968    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:39.348101    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:39.377489    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:39.440193    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:39.624804    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:39.847408    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:39.847579    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:39.939713    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:40.124199    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:40.347625    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:40.347754    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:40.440164    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:40.624628    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:40.847598    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:40.847601    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:40.931857    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:40.940055    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:41.125246    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:41.346824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:41.346935    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:41.439995    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:21:41.476358    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:41.476385    8599 retry.go:31] will retry after 5.864833014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
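Note: the retry intervals logged by retry.go across this section climb from roughly 335ms to 5.9s, growing exponentially with jitter (they are not strictly monotonic). A minimal shell sketch of the same pattern against the failing manifests; illustrative only, since the real loop lives in Go, runs the command over SSH, and caps its attempts:

  delay=0.3
  until sudo kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml; do
    sleep "$delay"
    delay=$(awk "BEGIN{print $delay*2}")  # double the wait on each failure
  done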
	I1019 16:21:41.625126    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:41.846932    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:41.847084    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:41.877415    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:41.940255    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:42.124792    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:42.347692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:42.347700    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:42.440045    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:42.624617    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:42.847322    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:42.847331    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:42.939803    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:43.124836    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:43.347494    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:43.347519    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:43.439937    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:43.624511    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:43.847033    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:43.847180    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:43.877472    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:43.940285    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:44.124822    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:44.347761    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:44.347779    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:44.440183    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:44.624814    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:44.847812    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:44.847855    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:44.940237    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:45.125448    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:45.347635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:45.347683    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:45.440250    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:45.624933    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:45.847737    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:45.847818    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:45.939831    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:46.124298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:46.347099    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:46.347227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:46.377696    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:46.439489    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:46.624899    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:46.848165    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:46.848186    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:46.939708    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:47.124957    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:47.342309    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:47.347368    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:47.347418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:47.439996    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:47.624630    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:47.847103    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:47.847227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:47.874716    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:47.874756    8599 retry.go:31] will retry after 13.58717238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
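
Note the retry cadence: the first re-apply was scheduled after ~5.9s, this one after ~13.6s. A stdlib Go sketch of that jittered-backoff pattern (attempt count and delays here are illustrative, not the actual schedule in minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs op until it succeeds or attempts are exhausted,
	// sleeping an exponentially growing, jittered interval between tries.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Double the base each round and add random jitter, which is why
			// the delays in the log are uneven rather than exact multiples.
			delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(3, 2*time.Second, func() error {
			return errors.New("apply failed") // stand-in for the kubectl apply above
		})
	}
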
	I1019 16:21:47.940422    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:48.124937    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:48.347703    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:48.347801    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:48.439898    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:48.624709    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:48.847530    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:48.847573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:48.878043    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:48.939799    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:49.124298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:49.346869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:49.346885    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:49.440285    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:49.624931    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:49.847646    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:49.847828    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:49.940016    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:50.124699    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:50.347687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:50.347694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:50.440424    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:50.624857    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:50.847753    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:50.847872    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:50.940361    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:51.124839    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:51.349928    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:51.350094    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:51.377672    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:51.439390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:51.625045    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:51.847022    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:51.847143    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:51.939895    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:52.125023    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:52.347960    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:52.347953    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:52.439324    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:52.624988    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:52.847853    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:52.847940    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:52.940092    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:53.124601    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:53.347384    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.347455    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:53.378428    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:53.440165    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:53.624628    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:53.847520    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.847573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:53.940040    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.124624    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:54.347333    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.347440    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.439766    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.624418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:54.847682    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.847710    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.940169    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.124908    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:55.347976    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.348000    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:55.440466    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.625125    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:55.846831    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.846947    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:55.877586    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:55.939172    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.124758    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:56.347554    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.347583    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.440326    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.624885    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:56.847763    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.847795    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.940457    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.125168    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.346762    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.346867    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.440277    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.624897    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.847687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.847692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:57.878060    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:57.939934    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.124597    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.347572    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.347608    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:58.439952    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.624617    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.847365    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.847450    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:58.939659    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.124267    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.347130    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.347187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.439113    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.624112    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.846847    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.846918    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.940008    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.124419    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.347373    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.347387    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:00.377704    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:22:00.439558    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.624095    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.846675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.846685    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.939976    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.124552    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.347309    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.347426    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.439616    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.462721    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:01.624382    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.847773    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.847868    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.939963    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:22:02.001806    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.001840    8599 retry.go:31] will retry after 11.85035315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.124424    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.347204    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.347308    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:02.378028    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:22:02.439905    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.624289    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.846955    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.847039    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.940498    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.128318    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.348632    8599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:22:03.348657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.348861    8599 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:22:03.348884    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.377889    8599 node_ready.go:49] node "addons-557770" is "Ready"
	I1019 16:22:03.377924    8599 node_ready.go:38] duration metric: took 41.003209654s for node "addons-557770" to be "Ready" ...
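
The node took ~41s to flip from the "Ready":"False" warnings above to Ready. A client-go sketch of the readiness check node_ready.go is performing, assuming the kubeconfig path from the log (error handling trimmed; illustrative, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-557770", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A node is "Ready" when its NodeReady condition reports status True.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has \"Ready\":%q\n", node.Name, c.Status)
			}
		}
	}
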
	I1019 16:22:03.377943    8599 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:22:03.377999    8599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:22:03.398273    8599 api_server.go:72] duration metric: took 41.659315703s to wait for apiserver process to appear ...
	I1019 16:22:03.398305    8599 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:22:03.398329    8599 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 16:22:03.404322    8599 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 16:22:03.405580    8599 api_server.go:141] control plane version: v1.34.1
	I1019 16:22:03.405615    8599 api_server.go:131] duration metric: took 7.30174ms to wait for apiserver health ...
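
The healthz gate is a plain HTTPS GET that must return 200 with body "ok". A hedged Go equivalent of the probe (the apiserver presents a self-signed certificate, so verification is skipped here purely for illustration; a real client would pin the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The minikube apiserver cert is not in the system trust store.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
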
	I1019 16:22:03.405626    8599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:22:03.448952    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.450376    8599 system_pods.go:59] 20 kube-system pods found
	I1019 16:22:03.450487    8599 system_pods.go:61] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.450577    8599 system_pods.go:61] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.450603    8599 system_pods.go:61] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.450637    8599 system_pods.go:61] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.450648    8599 system_pods.go:61] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.450654    8599 system_pods.go:61] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.450660    8599 system_pods.go:61] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.450668    8599 system_pods.go:61] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.450674    8599 system_pods.go:61] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.450685    8599 system_pods.go:61] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.450691    8599 system_pods.go:61] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.450698    8599 system_pods.go:61] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.450706    8599 system_pods.go:61] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.450717    8599 system_pods.go:61] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.450727    8599 system_pods.go:61] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.450738    8599 system_pods.go:61] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.450746    8599 system_pods.go:61] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.450763    8599 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.450774    8599 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.450783    8599 system_pods.go:61] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.450795    8599 system_pods.go:74] duration metric: took 45.16243ms to wait for pod list to return data ...
	I1019 16:22:03.450808    8599 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:22:03.455398    8599 default_sa.go:45] found service account: "default"
	I1019 16:22:03.455430    8599 default_sa.go:55] duration metric: took 4.610977ms for default service account to be created ...
	I1019 16:22:03.455442    8599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:22:03.550425    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:03.550461    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.550475    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.550483    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.550488    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.550494    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.550498    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.550502    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.550505    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.550509    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.550514    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.550517    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.550522    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.550527    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.550541    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.550546    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.550553    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.550558    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.550564    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.550570    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.550577    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.550590    8599 retry.go:31] will retry after 191.402617ms: missing components: kube-dns
	I1019 16:22:03.624398    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.746462    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:03.746521    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.746529    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.746538    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.746547    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.746553    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.746558    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.746562    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.746566    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.746569    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.746577    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.746582    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.746587    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.746594    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.746600    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.746605    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.746611    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.746616    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.746622    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.746637    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.746646    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.746660    8599 retry.go:31] will retry after 343.891877ms: missing components: kube-dns
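
The "missing components: kube-dns" retries stop once the coredns pod flips from Pending to Running, as the next listing shows. A client-go sketch of that per-component check, assuming the conventional k8s-app=kube-dns label on the coredns pods (illustrative only):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if running == 0 {
			fmt.Println("missing components: kube-dns") // would trigger another retry
		}
	}
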
	I1019 16:22:03.848132    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.848223    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.940137    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.095533    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:04.095567    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:04.095575    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:04.095582    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:04.095589    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:04.095595    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:04.095599    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:04.095603    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:04.095607    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:04.095610    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:04.095615    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:04.095618    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:04.095621    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:04.095626    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:04.095638    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:04.095645    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:04.095652    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:04.095657    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:04.095661    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.095666    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.095678    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:04.095693    8599 retry.go:31] will retry after 396.766042ms: missing components: kube-dns
	I1019 16:22:04.125279    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.350006    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.351570    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.441698    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.498832    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:04.498873    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:04.498882    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Running
	I1019 16:22:04.498894    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:04.498902    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:04.498911    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:04.498926    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:04.498933    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:04.498941    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:04.498947    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:04.498956    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:04.498963    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:04.498969    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:04.498978    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:04.498986    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:04.498994    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:04.499002    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:04.499011    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:04.499019    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.499029    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.499034    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Running
	I1019 16:22:04.499045    8599 system_pods.go:126] duration metric: took 1.043595241s to wait for k8s-apps to be running ...
	I1019 16:22:04.499055    8599 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:22:04.499129    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:22:04.518335    8599 system_svc.go:56] duration metric: took 19.271494ms WaitForService to wait for kubelet
	I1019 16:22:04.518365    8599 kubeadm.go:587] duration metric: took 42.779411932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:22:04.518397    8599 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:22:04.522260    8599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 16:22:04.522285    8599 node_conditions.go:123] node cpu capacity is 8
	I1019 16:22:04.522298    8599 node_conditions.go:105] duration metric: took 3.895909ms to run NodePressure ...
	I1019 16:22:04.522310    8599 start.go:242] waiting for startup goroutines ...
	I1019 16:22:04.625348    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.848190    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.848406    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.940046    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.125670    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.348240    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.348400    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.440332    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.625544    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.847809    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.847932    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.940033    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.125036    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.348294    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.348435    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.440523    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.625140    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.848864    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.848925    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.940185    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.126049    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.348516    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.348660    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.440501    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.625307    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.847476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.847650    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.940365    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.125060    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.348418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.348439    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.449180    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.625261    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.847368    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.847415    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.940555    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.125915    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.349236    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.350532    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.441137    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.625808    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.848294    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.848351    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.940363    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.125125    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.348470    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.348657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:10.442189    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.625143    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.887630    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.887767    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.023610    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.125034    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.348233    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.348357    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.440185    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.625735    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.848157    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.848217    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.940104    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.125024    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.348592    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.348649    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.449700    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.624533    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.847405    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.847534    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.940117    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.125164    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.351243    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.351411    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.440746    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.624688    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.848158    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.848238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.853284    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:13.939444    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.124879    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.348159    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.348181    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:22:14.421746    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:14.421785    8599 retry.go:31] will retry after 29.079297243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
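The failure above is self-describing: client-side validation rejects ig-crd.yaml because at least one YAML document in it lacks the apiVersion and kind fields every Kubernetes manifest must declare. A minimal sketch for confirming which document is malformed, and for the workaround the stderr itself names (--validate=false), follows; the file paths and kubectl invocation are taken from the log, the grep pattern is a generic check rather than anything minikube runs.

    # show where each document's top-level identity fields are (or are missing)
    sudo grep -nE '^(---|apiVersion:|kind:)' /etc/kubernetes/addons/ig-crd.yaml
    # workaround named in the stderr above: skip client-side validation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml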
	I1019 16:22:14.449095    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.624785    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.847767    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.847939    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.940274    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.125147    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.348401    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.348489    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.440541    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.625238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.847714    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.847911    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.940528    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.124407    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.347958    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.348110    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.440165    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.625040    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.848280    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.848425    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.940761    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.124881    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.348420    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.348637    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.440748    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.625692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.847960    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.848012    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.939905    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.125719    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.348225    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.348412    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.440119    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.625460    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.847851    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.847876    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.948682    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.124585    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.348054    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.348187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.448842    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.624679    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.847739    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.847974    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.939736    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.124858    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.425711    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.425778    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.484486    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.624852    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.847971    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.847981    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.940562    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.124356    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.348562    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.350599    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.442816    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.625245    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.847498    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.847620    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.940434    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.124336    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.347057    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.349597    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.439293    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.625298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.847436    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.847572    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.940618    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.125508    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.347325    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.347566    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.439709    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.624457    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.847635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.847692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.940346    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.125089    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.349390    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.349694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.441135    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.628113    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.855392    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.856286    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.940781    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.125096    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.366818    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.367576    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.481350    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.637112    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.848824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.848959    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.940338    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.127830    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.349024    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.349163    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.440690    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.625568    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.848574    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.848635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.940687    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.125106    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.438476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.438718    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.440751    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.624687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.848038    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.848193    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.940383    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.125573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.347754    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.348477    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.439974    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.624869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.848413    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.848586    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.940690    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.125413    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.472352    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.472426    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.472657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.626180    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.848648    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.848647    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.940947    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.124842    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.348284    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.348541    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.440266    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.625233    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.847476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.847612    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.948627    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.124869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.348279    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.348444    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.440943    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.625542    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.030809    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.030818    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.030864    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.125137    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.348227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.348402    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.448909    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.625491    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.848505    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.848804    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.940449    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.125238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.348100    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.348306    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.440405    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.625704    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.849208    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.849305    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.939874    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.124898    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.348651    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.349191    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.441668    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.625417    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.847480    8599 kapi.go:107] duration metric: took 1m11.003299175s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 16:22:34.847626    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.940673    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.124831    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.348442    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.544035    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.684217    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.848919    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.941932    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.125843    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.348040    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.440036    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.625174    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.848318    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.948909    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.127344    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.348894    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.442024    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.625865    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.848912    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.940386    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.124686    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.348098    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.441554    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.625115    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.848407    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.940192    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.125231    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.347759    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.440899    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.624768    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.848309    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.940276    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.125247    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.348373    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.440529    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.624927    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.848175    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.949315    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.126260    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.347676    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.443117    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.625187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.847543    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.027801    8599 kapi.go:107] duration metric: took 1m18.591278832s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 16:22:42.162953    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.348326    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.625154    8599 kapi.go:107] duration metric: took 1m12.503666122s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 16:22:42.627111    8599 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-557770 cluster.
	I1019 16:22:42.628733    8599 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 16:22:42.630380    8599 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
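Both hints above translate directly into commands. A minimal sketch, assuming a throwaway pod name (no-creds is hypothetical) and the profile name from this run:

    # pod whose credentials should NOT be mounted: label from the hint above
    kubectl run no-creds --image=nginx --labels="gcp-auth-skip-secret=true"
    # re-mount credentials into pods created before the addon was enabled
    minikube -p addons-557770 addons enable gcp-auth --refresh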
	I1019 16:22:42.847943    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.347488    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.501595    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:43.848284    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:44.139738    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 16:22:44.139894    8599 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 16:22:44.348439    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.848354    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.348840    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.848031    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.347943    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.847622    8599 kapi.go:107] duration metric: took 1m23.003499672s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 16:22:46.849665    8599 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, amd-gpu-device-plugin, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 16:22:46.851340    8599 addons.go:515] duration metric: took 1m25.112321573s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds amd-gpu-device-plugin default-storageclass nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 16:22:46.851384    8599 start.go:247] waiting for cluster config update ...
	I1019 16:22:46.851412    8599 start.go:256] writing updated cluster config ...
	I1019 16:22:46.851709    8599 ssh_runner.go:195] Run: rm -f paused
	I1019 16:22:46.855748    8599 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:22:46.858986    8599 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2p98v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.863498    8599 pod_ready.go:94] pod "coredns-66bc5c9577-2p98v" is "Ready"
	I1019 16:22:46.863528    8599 pod_ready.go:86] duration metric: took 4.517912ms for pod "coredns-66bc5c9577-2p98v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.865585    8599 pod_ready.go:83] waiting for pod "etcd-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.869554    8599 pod_ready.go:94] pod "etcd-addons-557770" is "Ready"
	I1019 16:22:46.869588    8599 pod_ready.go:86] duration metric: took 3.980467ms for pod "etcd-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.871418    8599 pod_ready.go:83] waiting for pod "kube-apiserver-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.875183    8599 pod_ready.go:94] pod "kube-apiserver-addons-557770" is "Ready"
	I1019 16:22:46.875214    8599 pod_ready.go:86] duration metric: took 3.774406ms for pod "kube-apiserver-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.878840    8599 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.260092    8599 pod_ready.go:94] pod "kube-controller-manager-addons-557770" is "Ready"
	I1019 16:22:47.260118    8599 pod_ready.go:86] duration metric: took 381.247465ms for pod "kube-controller-manager-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.460559    8599 pod_ready.go:83] waiting for pod "kube-proxy-zp9mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.886740    8599 pod_ready.go:94] pod "kube-proxy-zp9mk" is "Ready"
	I1019 16:22:47.886776    8599 pod_ready.go:86] duration metric: took 426.185807ms for pod "kube-proxy-zp9mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.143494    8599 pod_ready.go:83] waiting for pod "kube-scheduler-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.460162    8599 pod_ready.go:94] pod "kube-scheduler-addons-557770" is "Ready"
	I1019 16:22:48.460201    8599 pod_ready.go:86] duration metric: took 316.676281ms for pod "kube-scheduler-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.460218    8599 pod_ready.go:40] duration metric: took 1.604432361s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:22:48.508637    8599 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 16:22:48.510258    8599 out.go:179] * Done! kubectl is now configured to use "addons-557770" cluster and "default" namespace by default
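The pod_ready loop above selects kube-system pods by the labels listed in the log and waits for each to report Ready. The same checks can be made by hand with kubectl; a sketch using two of the selectors from the log:

    # same label selectors the readiness loop polls
    kubectl get pods -n kube-system -l k8s-app=kube-dns
    kubectl get pods -n kube-system -l component=kube-scheduler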
	
	
	==> CRI-O <==
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.337945949Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-98d9j/POD" id=38ecd073-a739-428c-9ab2-b6b13d84d10c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.338060112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.345507171Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-98d9j Namespace:default ID:3836064af583bfb2a10e57f72f55f039054a84a90b8e8ae9cb812bbff7e35420 UID:9cde65d3-ea27-4111-9784-fa379c7cd11a NetNS:/var/run/netns/8fb2553a-775a-4929-8d16-5f9734103e04 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d16d68}] Aliases:map[]}"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.345545315Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-98d9j to CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.355645287Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-98d9j Namespace:default ID:3836064af583bfb2a10e57f72f55f039054a84a90b8e8ae9cb812bbff7e35420 UID:9cde65d3-ea27-4111-9784-fa379c7cd11a NetNS:/var/run/netns/8fb2553a-775a-4929-8d16-5f9734103e04 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d16d68}] Aliases:map[]}"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.355799547Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-98d9j for CNI network kindnet (type=ptp)"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.356773793Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.35761226Z" level=info msg="Ran pod sandbox 3836064af583bfb2a10e57f72f55f039054a84a90b8e8ae9cb812bbff7e35420 with infra container: default/hello-world-app-5d498dc89-98d9j/POD" id=38ecd073-a739-428c-9ab2-b6b13d84d10c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.358977265Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a96b7790-eead-447e-acf3-8f42a39eba9b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.359150823Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a96b7790-eead-447e-acf3-8f42a39eba9b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.359195342Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=a96b7790-eead-447e-acf3-8f42a39eba9b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.359946432Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=73eb497b-9b77-43e3-ae4a-7ad6d6d24698 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:25:31 addons-557770 crio[772]: time="2025-10-19T16:25:31.364747256Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.186207848Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=73eb497b-9b77-43e3-ae4a-7ad6d6d24698 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.186848088Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3be5d647-f622-4cdc-8a81-d4fd07703491 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.188501671Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cda30fbe-e1d9-4e15-b9b8-464599899085 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.192681312Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-98d9j/hello-world-app" id=b7158b38-1437-461e-8ecd-ab1b28cfb197 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.193331407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.199462323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.199627681Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d033f9231a814a6efef28db9eaaeee4ad69f977609a225b9389ead3ba35a27c3/merged/etc/passwd: no such file or directory"
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.199657275Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d033f9231a814a6efef28db9eaaeee4ad69f977609a225b9389ead3ba35a27c3/merged/etc/group: no such file or directory"
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.199940257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.239486521Z" level=info msg="Created container 0ce3022afd347993f8f1800edbafade96ec77370edc6b476cd2a67f76284af56: default/hello-world-app-5d498dc89-98d9j/hello-world-app" id=b7158b38-1437-461e-8ecd-ab1b28cfb197 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.240180252Z" level=info msg="Starting container: 0ce3022afd347993f8f1800edbafade96ec77370edc6b476cd2a67f76284af56" id=fb5b14b7-0f4f-4c82-8519-ca51851e85e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 16:25:32 addons-557770 crio[772]: time="2025-10-19T16:25:32.242421945Z" level=info msg="Started container" PID=9834 containerID=0ce3022afd347993f8f1800edbafade96ec77370edc6b476cd2a67f76284af56 description=default/hello-world-app-5d498dc89-98d9j/hello-world-app id=fb5b14b7-0f4f-4c82-8519-ca51851e85e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3836064af583bfb2a10e57f72f55f039054a84a90b8e8ae9cb812bbff7e35420
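	The lines above are one complete image lifecycle in cri-o: the tag is not in the local store ("Neither image nor artifact ... found"), so it is pulled, the resolved digest is recorded, and the container is created and started; the /etc/passwd and /etc/group warnings are typically harmless for minimal images that ship neither file. The pull step can be replayed by hand on the node (a sketch; it assumes crictl is present and configured for the cri-o socket, as it is in minikube's kicbase image):

	  # Check the local store, pull the tag, then confirm the resolved digest.
	  minikube -p addons-557770 ssh -- sudo crictl images | grep echo-server
	  minikube -p addons-557770 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0
	  minikube -p addons-557770 ssh -- sudo crictl images --digests | grep echo-server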
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0ce3022afd347       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   3836064af583b       hello-world-app-5d498dc89-98d9j             default
	088d1217f9c6a       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   8ca9bab8d2059       registry-creds-764b6fb674-9zcvj             kube-system
	0556b188bc115       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   57276255444ba       nginx                                       default
	517678a62c8c3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   b143e15b70687       busybox                                     default
	9d790d16e8959       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	5888ac56628ff       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	50a236ad43b3d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	05400eb2fd5eb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	efa983b0c0938       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   b1502643499dd       gcp-auth-78565c9fb4-d8qwj                   gcp-auth
	000220bd236a5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago            Running             controller                               0                   e241852a6cbd0       ingress-nginx-controller-675c5ddd98-jcfrv   ingress-nginx
	2e93d36349d27       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	943c05844222c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            2 minutes ago            Running             gadget                                   0                   95f17bddfabb1       gadget-jpd5t                                gadget
	c503d7edb96f3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   10bfc35e635b9       registry-proxy-cbqn4                        kube-system
	bfa49cfea4019       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	aeccc8f632779       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   52909df1b2d0b       nvidia-device-plugin-daemonset-5d5sr        kube-system
	d3273937d7efc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   42416c16b54d5       csi-hostpath-resizer-0                      kube-system
	6ef8a71fe5d39       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   0018a5b9d0d6c       snapshot-controller-7d9fbc56b8-g8g8j        kube-system
	6e2091eec84de       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   aac0d45065796       amd-gpu-device-plugin-66kws                 kube-system
	037c0332147ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   0eae7619bffee       snapshot-controller-7d9fbc56b8-7w96c        kube-system
	ab1a761dc93fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   14f9ac01bfee1       ingress-nginx-admission-patch-kb26q         ingress-nginx
	243f77f2d4cc6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   c95dabd6611a9       ingress-nginx-admission-create-7tns9        ingress-nginx
	65863d6eb03db       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   d9d602515b856       csi-hostpath-attacher-0                     kube-system
	d904f44568c85       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   191e664d32140       yakd-dashboard-5ff678cb9-g4ltn              yakd-dashboard
	28fc6fff59e06       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   5db9d88acd8c1       local-path-provisioner-648f6765c9-gsfrf     local-path-storage
	28dca4a98797d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   66519a6c74604       registry-6b586f9694-fcnms                   kube-system
	893241a14d701       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   4ba47595d0d03       metrics-server-85b7d694d7-6qb49             kube-system
	091ae9a183d46       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   485c819e3059e       cloud-spanner-emulator-86bd5cbb97-w5gdv     default
	b929bd44832f6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   174e82481266f       kube-ingress-dns-minikube                   kube-system
	ef3b3e7a48948       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   d89302dfd9c10       coredns-66bc5c9577-2p98v                    kube-system
	8d61047c5353c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   120ad84aa01c8       storage-provisioner                         kube-system
	91694d16fb2b0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   df4eef4219e62       kube-proxy-zp9mk                            kube-system
	00766cde13982       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   da6ada4832f1e       kindnet-qbbdx                               kube-system
	49e88f9620ecb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   0b4fd5fb5b1fa       etcd-addons-557770                          kube-system
	64ffa4d775be3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   4947e309d55fc       kube-scheduler-addons-557770                kube-system
	7ea12626c6ada       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   237780f5ae90b       kube-apiserver-addons-557770                kube-system
	75207afa634f8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   0c2ca08606766       kube-controller-manager-addons-557770       kube-system
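	This table is the runtime's own view of the workloads, gathered over the CRI API rather than through kubectl; roughly the same listing can be reproduced on the node (a sketch, under the same crictl assumption as above):

	  # All containers, running and exited, plus the sandboxes behind the POD ID column.
	  minikube -p addons-557770 ssh -- sudo crictl ps -a
	  minikube -p addons-557770 ssh -- sudo crictl pods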
	
	
	==> coredns [ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b] <==
	[INFO] 10.244.0.22:49148 - 923 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00639971s
	[INFO] 10.244.0.22:57439 - 64337 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005384593s
	[INFO] 10.244.0.22:48502 - 18987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005582581s
	[INFO] 10.244.0.22:34854 - 63633 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005324608s
	[INFO] 10.244.0.22:36136 - 60755 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005668887s
	[INFO] 10.244.0.22:49890 - 34889 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001155391s
	[INFO] 10.244.0.22:43808 - 52476 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002546802s
	[INFO] 10.244.0.27:60772 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254078s
	[INFO] 10.244.0.27:50994 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014981s
	[INFO] 10.244.0.31:38852 - 10398 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000216558s
	[INFO] 10.244.0.31:59489 - 445 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000294312s
	[INFO] 10.244.0.31:58893 - 37059 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000140364s
	[INFO] 10.244.0.31:38040 - 19772 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000188591s
	[INFO] 10.244.0.31:60878 - 37403 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000126242s
	[INFO] 10.244.0.31:33211 - 9576 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000197328s
	[INFO] 10.244.0.31:40678 - 60321 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003325491s
	[INFO] 10.244.0.31:60713 - 2465 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003451235s
	[INFO] 10.244.0.31:60868 - 23271 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004513645s
	[INFO] 10.244.0.31:49110 - 58760 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005740248s
	[INFO] 10.244.0.31:37467 - 38215 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005023103s
	[INFO] 10.244.0.31:53728 - 40471 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006366772s
	[INFO] 10.244.0.31:53270 - 38076 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00458111s
	[INFO] 10.244.0.31:44722 - 11362 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004754994s
	[INFO] 10.244.0.31:40469 - 50342 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001864285s
	[INFO] 10.244.0.31:52737 - 11736 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001950091s
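	The long NXDOMAIN chains above are expected resolver behavior, not lookup failures: Kubernetes pods run with ndots:5, so a name like accounts.google.com is first expanded through every search suffix (the cluster suffixes plus the suffixes the node inherited from its GCE host) before being tried as an absolute name, which is the closing NOERROR pair. The pod-side configuration driving this looks roughly like the following (illustrative only; the nameserver shown is the conventional kube-dns ClusterIP, and the exact suffix list varies with the pod's namespace and the host):

	  # /etc/resolv.conf inside a pod (sketch; values not taken from this report)
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	  nameserver 10.96.0.10
	  options ndots:5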
	
	
	==> describe nodes <==
	Name:               addons-557770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-557770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=addons-557770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_21_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-557770
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-557770"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:21:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-557770
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:25:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:24:51 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:24:51 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:24:51 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:24:51 +0000   Sun, 19 Oct 2025 16:22:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-557770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2172c73a-e4ea-49ca-bef8-694dddc2eb52
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     cloud-spanner-emulator-86bd5cbb97-w5gdv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  default                     hello-world-app-5d498dc89-98d9j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-jpd5t                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  gcp-auth                    gcp-auth-78565c9fb4-d8qwj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jcfrv    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m9s
	  kube-system                 amd-gpu-device-plugin-66kws                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-66bc5c9577-2p98v                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m11s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 csi-hostpathplugin-vvt5x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-addons-557770                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-qbbdx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m11s
	  kube-system                 kube-apiserver-addons-557770                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-557770        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-zp9mk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-addons-557770                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 metrics-server-85b7d694d7-6qb49              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m9s
	  kube-system                 nvidia-device-plugin-daemonset-5d5sr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 registry-6b586f9694-fcnms                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-creds-764b6fb674-9zcvj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 registry-proxy-cbqn4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-7w96c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-g8g8j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  local-path-storage          local-path-provisioner-648f6765c9-gsfrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-g4ltn               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m9s   kube-proxy       
	  Normal  Starting                 4m16s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node addons-557770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node addons-557770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node addons-557770 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m12s  node-controller  Node addons-557770 event: Registered Node addons-557770 in Controller
	  Normal  NodeReady                3m30s  kubelet          Node addons-557770 status is now: NodeReady
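	The node description above is a point-in-time capture; the same view can be regenerated at any moment (assuming a kubeconfig pointed at this cluster):

	  # Full description, or just the condition types machine-readably.
	  kubectl describe node addons-557770
	  kubectl get node addons-557770 -o jsonpath='{.status.conditions[*].type}'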
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
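	The repeated "martian source" entries record packets with a 127.0.0.1 source arriving on eth0. Since kube-proxy sets route_localnet=1 (see its log below) so that NodePorts also answer on localhost, such packets are a routine side effect of localhost NodePort traffic rather than spoofing. The logging itself is a sysctl and can be checked or silenced on the node (a sketch):

	  # 1 = log martian packets, 0 = silence them.
	  minikube -p addons-557770 ssh -- sysctl net.ipv4.conf.all.log_martians
	  minikube -p addons-557770 ssh -- sudo sysctl -w net.ipv4.conf.all.log_martians=0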
	
	
	==> etcd [49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468] <==
	{"level":"warn","ts":"2025-10-19T16:21:50.700957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:22:27.314619Z","caller":"traceutil/trace.go:172","msg":"trace[27266233] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"100.68502ms","start":"2025-10-19T16:22:27.213917Z","end":"2025-10-19T16:22:27.314602Z","steps":["trace[27266233] 'process raft request'  (duration: 100.554636ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:29.470164Z","caller":"traceutil/trace.go:172","msg":"trace[496259861] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1125; }","duration":"123.953446ms","start":"2025-10-19T16:22:29.346194Z","end":"2025-10-19T16:22:29.470148Z","steps":["trace[496259861] 'read index received'  (duration: 123.947823ms)","trace[496259861] 'applied index is now lower than readState.Index'  (duration: 4.932µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:29.470286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.071233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:29.470347Z","caller":"traceutil/trace.go:172","msg":"trace[525323882] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"124.154911ms","start":"2025-10-19T16:22:29.346184Z","end":"2025-10-19T16:22:29.470339Z","steps":["trace[525323882] 'agreement among raft nodes before linearized reading'  (duration: 124.045209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:29.470390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.160867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:29.470439Z","caller":"traceutil/trace.go:172","msg":"trace[700802004] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1096; }","duration":"124.217446ms","start":"2025-10-19T16:22:29.346211Z","end":"2025-10-19T16:22:29.470429Z","steps":["trace[700802004] 'agreement among raft nodes before linearized reading'  (duration: 124.141555ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:29.470435Z","caller":"traceutil/trace.go:172","msg":"trace[1970042286] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"142.154101ms","start":"2025-10-19T16:22:29.328269Z","end":"2025-10-19T16:22:29.470424Z","steps":["trace[1970042286] 'process raft request'  (duration: 141.959559ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:32.028513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.141781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-19T16:22:32.028543Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.172698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:32.028590Z","caller":"traceutil/trace.go:172","msg":"trace[951143562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"182.229461ms","start":"2025-10-19T16:22:31.846342Z","end":"2025-10-19T16:22:32.028571Z","steps":["trace[951143562] 'range keys from in-memory index tree'  (duration: 182.070344ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:32.028592Z","caller":"traceutil/trace.go:172","msg":"trace[985431473] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"182.22521ms","start":"2025-10-19T16:22:31.846355Z","end":"2025-10-19T16:22:32.028581Z","steps":["trace[985431473] 'range keys from in-memory index tree'  (duration: 182.114777ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.542402Z","caller":"traceutil/trace.go:172","msg":"trace[2115973208] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"103.454415ms","start":"2025-10-19T16:22:35.438918Z","end":"2025-10-19T16:22:35.542373Z","steps":["trace[2115973208] 'read index received'  (duration: 103.448276ms)","trace[2115973208] 'applied index is now lower than readState.Index'  (duration: 5.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:35.542574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.633387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:35.542605Z","caller":"traceutil/trace.go:172","msg":"trace[798378256] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1124; }","duration":"103.683562ms","start":"2025-10-19T16:22:35.438914Z","end":"2025-10-19T16:22:35.542598Z","steps":["trace[798378256] 'agreement among raft nodes before linearized reading'  (duration: 103.551957ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.542670Z","caller":"traceutil/trace.go:172","msg":"trace[1300325174] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"149.169408ms","start":"2025-10-19T16:22:35.393482Z","end":"2025-10-19T16:22:35.542651Z","steps":["trace[1300325174] 'process raft request'  (duration: 149.06006ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.562347Z","caller":"traceutil/trace.go:172","msg":"trace[270939368] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"165.856008ms","start":"2025-10-19T16:22:35.396471Z","end":"2025-10-19T16:22:35.562327Z","steps":["trace[270939368] 'process raft request'  (duration: 165.745006ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:48.141167Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.364393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:48.141247Z","caller":"traceutil/trace.go:172","msg":"trace[281555488] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1207; }","duration":"179.461896ms","start":"2025-10-19T16:22:47.961770Z","end":"2025-10-19T16:22:48.141232Z","steps":["trace[281555488] 'agreement among raft nodes before linearized reading'  (duration: 58.975108ms)","trace[281555488] 'range keys from in-memory index tree'  (duration: 120.347983ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:48.141903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.540697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040740583415926 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1201 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T16:22:48.141991Z","caller":"traceutil/trace.go:172","msg":"trace[1094269978] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"253.013031ms","start":"2025-10-19T16:22:47.888961Z","end":"2025-10-19T16:22:48.141974Z","steps":["trace[1094269978] 'process raft request'  (duration: 131.794385ms)","trace[1094269978] 'compare'  (duration: 120.44973ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:23:19.001583Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.062025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-10-19T16:23:19.001640Z","caller":"traceutil/trace.go:172","msg":"trace[1340997627] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1365; }","duration":"100.140278ms","start":"2025-10-19T16:23:18.901490Z","end":"2025-10-19T16:23:19.001630Z","steps":["trace[1340997627] 'agreement among raft nodes before linearized reading'  (duration: 99.967719ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:23:19.001689Z","caller":"traceutil/trace.go:172","msg":"trace[1793833200] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"105.852393ms","start":"2025-10-19T16:23:18.895818Z","end":"2025-10-19T16:23:19.001671Z","steps":["trace[1793833200] 'process raft request'  (duration: 105.699103ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:24:00.434320Z","caller":"traceutil/trace.go:172","msg":"trace[1752624065] transaction","detail":"{read_only:false; response_revision:1511; number_of_response:1; }","duration":"104.815464ms","start":"2025-10-19T16:24:00.329477Z","end":"2025-10-19T16:24:00.434292Z","steps":["trace[1752624065] 'process raft request'  (duration: 104.541491ms)"],"step_count":1}
	
	
	==> gcp-auth [efa983b0c093864ff02f6d7eca25b115176c595c66ca518fe543533c863e46ce] <==
	2025/10/19 16:22:42 GCP Auth Webhook started!
	2025/10/19 16:22:48 Ready to marshal response ...
	2025/10/19 16:22:48 Ready to write response ...
	2025/10/19 16:22:49 Ready to marshal response ...
	2025/10/19 16:22:49 Ready to write response ...
	2025/10/19 16:22:49 Ready to marshal response ...
	2025/10/19 16:22:49 Ready to write response ...
	2025/10/19 16:22:56 Ready to marshal response ...
	2025/10/19 16:22:56 Ready to write response ...
	2025/10/19 16:22:56 Ready to marshal response ...
	2025/10/19 16:22:56 Ready to write response ...
	2025/10/19 16:23:04 Ready to marshal response ...
	2025/10/19 16:23:04 Ready to write response ...
	2025/10/19 16:23:06 Ready to marshal response ...
	2025/10/19 16:23:06 Ready to write response ...
	2025/10/19 16:23:07 Ready to marshal response ...
	2025/10/19 16:23:07 Ready to write response ...
	2025/10/19 16:23:27 Ready to marshal response ...
	2025/10/19 16:23:27 Ready to write response ...
	2025/10/19 16:23:40 Ready to marshal response ...
	2025/10/19 16:23:40 Ready to write response ...
	2025/10/19 16:25:31 Ready to marshal response ...
	2025/10/19 16:25:31 Ready to write response ...
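	Each "Ready to marshal/write" pair appears to correspond to one admission request served by the webhook, so this log doubles as a timeline of pod creations after 16:22:42; the final pair at 16:25:31 lines up with the hello-world-app pod created above. The stream can be followed live (assuming the Deployment is named gcp-auth, as the pod name suggests):

	  # Deployment name inferred from the pod name gcp-auth-78565c9fb4-d8qwj
	  kubectl -n gcp-auth logs deploy/gcp-auth --tail=20 -f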
	
	
	==> kernel <==
	 16:25:32 up 7 min,  0 user,  load average: 0.35, 0.76, 0.41
	Linux addons-557770 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c] <==
	I1019 16:23:32.790629       1 main.go:301] handling current node
	I1019 16:23:42.789773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:42.789816       1 main.go:301] handling current node
	I1019 16:23:52.789560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:52.789603       1 main.go:301] handling current node
	I1019 16:24:02.790319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:02.790348       1 main.go:301] handling current node
	I1019 16:24:12.793314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:12.793346       1 main.go:301] handling current node
	I1019 16:24:22.796321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:22.796351       1 main.go:301] handling current node
	I1019 16:24:32.790158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:32.790208       1 main.go:301] handling current node
	I1019 16:24:42.789880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:42.789935       1 main.go:301] handling current node
	I1019 16:24:52.790433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:52.790472       1 main.go:301] handling current node
	I1019 16:25:02.789630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:02.789663       1 main.go:301] handling current node
	I1019 16:25:12.796709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:12.796744       1 main.go:301] handling current node
	I1019 16:25:22.790530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:22.790585       1 main.go:301] handling current node
	I1019 16:25:32.790323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:32.790366       1 main.go:301] handling current node
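	kindnet reconciles on a roughly 10-second tick, and with a single node each pass reduces to "handling current node"; the absence of route changes between ticks is the healthy case here. The same stream can be tailed from the pod named in the container table:

	  kubectl -n kube-system logs kindnet-qbbdx --tail=10 -f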
	
	
	==> kube-apiserver [7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5] <==
	W1019 16:21:50.700949       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:03.015850       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.015895       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.015930       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.015957       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.036140       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.036179       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.038881       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.039004       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:14.312054       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 16:22:14.312054       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.312147       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 16:22:14.312499       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.318300       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.338957       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	I1019 16:22:14.409745       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 16:22:56.191227       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37224: use of closed network connection
	E1019 16:22:56.348933       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37266: use of closed network connection
	I1019 16:23:07.350169       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 16:23:07.560329       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.77.235"}
	I1019 16:23:36.087527       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 16:25:31.100129       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.186.150"}
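	Both error families above are startup ordering rather than steady-state faults: the gcp-auth mutating webhook is invoked (and fails open, so pods still admit) before its Service has endpoints, and the metrics.k8s.io APIService is probed before metrics-server answers. Whether both have since recovered can be confirmed with (assuming kubectl access to this cluster):

	  # Available should be True once metrics-server is serving.
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  # The webhook's backing Service should list pod endpoints.
	  kubectl -n gcp-auth get endpoints gcp-auth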
	
	
	==> kube-controller-manager [75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65] <==
	I1019 16:21:20.655476       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:21:20.655544       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-557770"
	I1019 16:21:20.655546       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:21:20.655600       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 16:21:20.655791       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:21:20.655876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 16:21:20.655886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:21:20.655945       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:21:20.656404       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:21:20.656430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:21:20.656492       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:21:20.656510       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:21:20.658843       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:21:20.660965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:20.664524       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:21:20.676777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:21:23.163671       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 16:21:50.666232       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:21:50.666343       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 16:21:50.666383       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 16:21:50.685739       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 16:21:50.689163       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 16:21:50.766679       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:50.790054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:22:05.660764       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
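	The resource-quota and garbage-collector complaints about "stale GroupVersion discovery: metrics.k8s.io/v1beta1" share the metrics-server root cause seen in the apiserver log; aggregated discovery can be exercised directly to confirm it now resolves:

	  # Returns 503 while metrics-server is down, an APIResourceList once it is up.
	  kubectl get --raw /apis/metrics.k8s.io/v1beta1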
	
	
	==> kube-proxy [91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e] <==
	I1019 16:21:22.385790       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:21:22.694681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:21:22.797483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:21:22.798166       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:21:22.800178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:21:22.999701       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:21:22.999781       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:21:23.016745       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:21:23.026386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:21:23.026770       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:21:23.037478       1 config.go:200] "Starting service config controller"
	I1019 16:21:23.037558       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:21:23.037606       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:21:23.037630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:21:23.037663       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:21:23.037686       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:21:23.040175       1 config.go:309] "Starting node config controller"
	I1019 16:21:23.040257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:21:23.040288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:21:23.137786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:21:23.138049       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:21:23.138361       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
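	kube-proxy comes up in iptables mode and flags its own configuration as possibly incomplete because nodePortAddresses is unset, meaning NodePorts bind on every local IP; together with route_localnet=1 this is also what makes NodePorts reachable on localhost (and what produces the dmesg martians above). The live configuration is kept in a ConfigMap and can be dumped for inspection (a sketch, assuming the kubeadm-style ConfigMap name that minikube uses):

	  # ConfigMap name assumes the kubeadm convention
	  kubectl -n kube-system get configmap kube-proxy -o yaml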
	
	
	==> kube-scheduler [64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd] <==
	E1019 16:21:13.680519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:21:13.680648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:13.680692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:13.680823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:13.680953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:13.681052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:13.681096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:21:14.485340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:14.487591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:14.557104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:14.560526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:21:14.597605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:14.601876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:21:14.684886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:21:14.688554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:14.759790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:21:14.785614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:14.801798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:14.816953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:14.905323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:21:14.909235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 16:21:14.911125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:21:14.935228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 16:21:14.957224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:21:16.777723       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.531129    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f88815f4-ad07-11f0-bea3-7e8ab9bcb427\") pod \"f5dab36a-3627-43be-85ec-d5a5467681ed\" (UID: \"f5dab36a-3627-43be-85ec-d5a5467681ed\") "
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.531209    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5dab36a-3627-43be-85ec-d5a5467681ed-gcp-creds\") pod \"f5dab36a-3627-43be-85ec-d5a5467681ed\" (UID: \"f5dab36a-3627-43be-85ec-d5a5467681ed\") "
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.531404    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5dab36a-3627-43be-85ec-d5a5467681ed-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f5dab36a-3627-43be-85ec-d5a5467681ed" (UID: "f5dab36a-3627-43be-85ec-d5a5467681ed"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.533419    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dab36a-3627-43be-85ec-d5a5467681ed-kube-api-access-rz6dh" (OuterVolumeSpecName: "kube-api-access-rz6dh") pod "f5dab36a-3627-43be-85ec-d5a5467681ed" (UID: "f5dab36a-3627-43be-85ec-d5a5467681ed"). InnerVolumeSpecName "kube-api-access-rz6dh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.534149    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^f88815f4-ad07-11f0-bea3-7e8ab9bcb427" (OuterVolumeSpecName: "task-pv-storage") pod "f5dab36a-3627-43be-85ec-d5a5467681ed" (UID: "f5dab36a-3627-43be-85ec-d5a5467681ed"). InnerVolumeSpecName "pvc-4e4b5af6-e6a3-404e-a45f-770b1b7227f8". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.632290    1306 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5dab36a-3627-43be-85ec-d5a5467681ed-gcp-creds\") on node \"addons-557770\" DevicePath \"\""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.632326    1306 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz6dh\" (UniqueName: \"kubernetes.io/projected/f5dab36a-3627-43be-85ec-d5a5467681ed-kube-api-access-rz6dh\") on node \"addons-557770\" DevicePath \"\""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.632360    1306 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-4e4b5af6-e6a3-404e-a45f-770b1b7227f8\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f88815f4-ad07-11f0-bea3-7e8ab9bcb427\") on node \"addons-557770\" "
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.636696    1306 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-4e4b5af6-e6a3-404e-a45f-770b1b7227f8" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^f88815f4-ad07-11f0-bea3-7e8ab9bcb427") on node "addons-557770"
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.733391    1306 reconciler_common.go:299] "Volume detached for volume \"pvc-4e4b5af6-e6a3-404e-a45f-770b1b7227f8\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f88815f4-ad07-11f0-bea3-7e8ab9bcb427\") on node \"addons-557770\" DevicePath \"\""
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.760256    1306 scope.go:117] "RemoveContainer" containerID="84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e"
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.769414    1306 scope.go:117] "RemoveContainer" containerID="84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e"
	Oct 19 16:23:48 addons-557770 kubelet[1306]: E1019 16:23:48.769852    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e\": container with ID starting with 84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e not found: ID does not exist" containerID="84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e"
	Oct 19 16:23:48 addons-557770 kubelet[1306]: I1019 16:23:48.769903    1306 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e"} err="failed to get container status \"84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e\": rpc error: code = NotFound desc = could not find container \"84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e\": container with ID starting with 84eb46cc349793e82b501360d6921a5cb08055a61587959d59c60a7028542b3e not found: ID does not exist"
	Oct 19 16:23:50 addons-557770 kubelet[1306]: I1019 16:23:50.098810    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dab36a-3627-43be-85ec-d5a5467681ed" path="/var/lib/kubelet/pods/f5dab36a-3627-43be-85ec-d5a5467681ed/volumes"
	Oct 19 16:23:58 addons-557770 kubelet[1306]: I1019 16:23:58.096038    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d5sr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:24:02 addons-557770 kubelet[1306]: I1019 16:24:02.095772    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cbqn4" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:24:06 addons-557770 kubelet[1306]: E1019 16:24:06.031283    1306 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-9zcvj" podUID="ae105e8b-c740-4d2e-8cbf-ac8ec523125c"
	Oct 19 16:24:22 addons-557770 kubelet[1306]: I1019 16:24:22.901204    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-9zcvj" podStartSLOduration=179.794959565 podStartE2EDuration="3m0.901183865s" podCreationTimestamp="2025-10-19 16:21:22 +0000 UTC" firstStartedPulling="2025-10-19 16:24:21.118961612 +0000 UTC m=+185.106302685" lastFinishedPulling="2025-10-19 16:24:22.225185896 +0000 UTC m=+186.212526985" observedRunningTime="2025-10-19 16:24:22.900765078 +0000 UTC m=+186.888106192" watchObservedRunningTime="2025-10-19 16:24:22.901183865 +0000 UTC m=+186.888524958"
	Oct 19 16:24:32 addons-557770 kubelet[1306]: I1019 16:24:32.096106    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fcnms" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:24:56 addons-557770 kubelet[1306]: I1019 16:24:56.097183    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-66kws" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:25:14 addons-557770 kubelet[1306]: I1019 16:25:14.096524    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d5sr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:25:23 addons-557770 kubelet[1306]: I1019 16:25:23.095508    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cbqn4" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:25:31 addons-557770 kubelet[1306]: I1019 16:25:31.137031    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9cde65d3-ea27-4111-9784-fa379c7cd11a-gcp-creds\") pod \"hello-world-app-5d498dc89-98d9j\" (UID: \"9cde65d3-ea27-4111-9784-fa379c7cd11a\") " pod="default/hello-world-app-5d498dc89-98d9j"
	Oct 19 16:25:31 addons-557770 kubelet[1306]: I1019 16:25:31.137214    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7tsm\" (UniqueName: \"kubernetes.io/projected/9cde65d3-ea27-4111-9784-fa379c7cd11a-kube-api-access-r7tsm\") pod \"hello-world-app-5d498dc89-98d9j\" (UID: \"9cde65d3-ea27-4111-9784-fa379c7cd11a\") " pod="default/hello-world-app-5d498dc89-98d9j"
	
	
	==> storage-provisioner [8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693] <==
	W1019 16:25:08.731319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:10.734009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:10.739493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:12.742427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:12.747773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:14.751510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:14.755690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:16.759227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:16.763638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:18.766847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:18.771160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:20.774342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:20.779164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:22.782732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:22.786878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:24.790920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:24.796193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:26.799694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:26.804124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:28.807174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:28.812179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:30.815298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:30.820380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:32.823182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:25:32.826725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
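The storage-provisioner section of the dump above is dominated by `v1 Endpoints is deprecated in v1.33+` warnings, fired in pairs every two seconds. The apiserver attaches that warning to every Endpoints request, and a client that holds and renews an Endpoints-based leader-election lock produces exactly this steady drip. Below is a minimal sketch of the Lease-based election client-go offers as the replacement (this is not the provisioner's actual code; the lock name, identity, and run() work loop are placeholders):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// run stands in for the provisioner's real control loop.
func run(ctx context.Context) { <-ctx.Done() }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	host, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: host},
	}

	// Lease renewals go to coordination.k8s.io, so none of this traffic
	// touches the deprecated v1 Endpoints API.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,
			OnStoppedLeading: func() { os.Exit(0) },
		},
	})
}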
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-557770 -n addons-557770
helpers_test.go:269: (dbg) Run:  kubectl --context addons-557770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-557770 describe pod ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-557770 describe pod ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q: exit status 1 (61.408377ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7tns9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kb26q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-557770 describe pod ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (236.650526ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:25:33.674577   23571 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:25:33.674891   23571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:33.674902   23571 out.go:374] Setting ErrFile to fd 2...
	I1019 16:25:33.674906   23571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:33.675170   23571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:25:33.675489   23571 mustload.go:66] Loading cluster: addons-557770
	I1019 16:25:33.675870   23571 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:33.675888   23571 addons.go:607] checking whether the cluster is paused
	I1019 16:25:33.675986   23571 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:33.676000   23571 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:25:33.676420   23571 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:25:33.695868   23571 ssh_runner.go:195] Run: systemctl --version
	I1019 16:25:33.695925   23571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:25:33.714870   23571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:25:33.813063   23571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:25:33.813186   23571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:25:33.843285   23571 cri.go:89] found id: "088d1217f9c6a9dc8d121ef1b7b30a306b811d9cd63f476361917d6139977562"
	I1019 16:25:33.843317   23571 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:25:33.843321   23571 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:25:33.843324   23571 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:25:33.843327   23571 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:25:33.843331   23571 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:25:33.843333   23571 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:25:33.843336   23571 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:25:33.843338   23571 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:25:33.843353   23571 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:25:33.843356   23571 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:25:33.843358   23571 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:25:33.843361   23571 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:25:33.843363   23571 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:25:33.843366   23571 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:25:33.843372   23571 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:25:33.843377   23571 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:25:33.843381   23571 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:25:33.843383   23571 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:25:33.843385   23571 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:25:33.843388   23571 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:25:33.843390   23571 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:25:33.843393   23571 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:25:33.843395   23571 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:25:33.843398   23571 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:25:33.843400   23571 cri.go:89] found id: ""
	I1019 16:25:33.843444   23571 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:25:33.857970   23571 out.go:203] 
	W1019 16:25:33.859362   23571 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:25:33.859391   23571 out.go:285] * 
	* 
	W1019 16:25:33.862476   23571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:25:33.864029   23571 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
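Every `exit status 11` disable failure in this report has the same shape, and the stderr above shows it end to end: minikube lists the kube-system containers over CRI, then checks for paused ones with `sudo runc list -f json`, and that probe dies with `open /run/runc: no such file or directory` because nothing on this CRI-O node has ever populated runc's state directory (the runtime may be crun, for instance). Below is a hedged sketch, not minikube's actual implementation, of a probe that treats the missing state directory as "no containers, hence nothing paused" instead of a fatal error:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer carries the two fields of `runc list -f json` output
// that a paused-check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers, treating an absent
// /run/runc state directory as an empty list rather than an error.
func listPaused() ([]string, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// runc has never managed a container here, so nothing can be paused.
		if strings.Contains(stderr.String(), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, stderr.String())
	}
	var containers []runcContainer
	// `runc list -f json` prints "null" when the list is empty;
	// Unmarshal handles that by leaving the slice nil.
	if err := json.Unmarshal(stdout.Bytes(), &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	fmt.Println("paused:", paused, "err:", err)
}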
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable ingress --alsologtostderr -v=1: exit status 11 (239.54789ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:25:33.912376   23630 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:25:33.912703   23630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:33.912715   23630 out.go:374] Setting ErrFile to fd 2...
	I1019 16:25:33.912719   23630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:33.912941   23630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:25:33.913243   23630 mustload.go:66] Loading cluster: addons-557770
	I1019 16:25:33.913621   23630 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:33.913638   23630 addons.go:607] checking whether the cluster is paused
	I1019 16:25:33.913721   23630 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:33.913733   23630 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:25:33.914128   23630 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:25:33.933108   23630 ssh_runner.go:195] Run: systemctl --version
	I1019 16:25:33.933172   23630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:25:33.952735   23630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:25:34.049943   23630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:25:34.050032   23630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:25:34.081185   23630 cri.go:89] found id: "088d1217f9c6a9dc8d121ef1b7b30a306b811d9cd63f476361917d6139977562"
	I1019 16:25:34.081231   23630 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:25:34.081237   23630 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:25:34.081241   23630 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:25:34.081246   23630 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:25:34.081252   23630 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:25:34.081256   23630 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:25:34.081259   23630 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:25:34.081263   23630 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:25:34.081289   23630 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:25:34.081298   23630 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:25:34.081302   23630 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:25:34.081307   23630 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:25:34.081311   23630 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:25:34.081314   23630 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:25:34.081330   23630 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:25:34.081338   23630 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:25:34.081342   23630 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:25:34.081344   23630 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:25:34.081346   23630 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:25:34.081350   23630 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:25:34.081354   23630 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:25:34.081361   23630 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:25:34.081366   23630 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:25:34.081373   23630 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:25:34.081377   23630 cri.go:89] found id: ""
	I1019 16:25:34.081436   23630 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:25:34.096151   23630 out.go:203] 
	W1019 16:25:34.097830   23630 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:25:34.097856   23630 out.go:285] * 
	* 
	W1019 16:25:34.101799   23630 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:25:34.103234   23630 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.01s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jpd5t" [b9256ae3-f5b7-41cb-9563-929e0431b101] Running
2025/10/19 16:23:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00419323s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (241.728131ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:14.933960   20226 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:14.934287   20226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:14.934298   20226 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:14.934302   20226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:14.934494   20226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:14.934755   20226 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:14.935150   20226 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:14.935168   20226 addons.go:607] checking whether the cluster is paused
	I1019 16:23:14.935252   20226 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:14.935263   20226 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:14.935633   20226 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:14.954204   20226 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:14.954252   20226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:14.974359   20226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:15.071976   20226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:15.072095   20226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:15.102458   20226 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:15.102502   20226 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:15.102509   20226 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:15.102514   20226 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:15.102517   20226 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:15.102520   20226 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:15.102523   20226 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:15.102526   20226 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:15.102528   20226 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:15.102564   20226 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:15.102567   20226 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:15.102569   20226 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:15.102571   20226 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:15.102574   20226 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:15.102576   20226 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:15.102588   20226 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:15.102593   20226 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:15.102598   20226 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:15.102600   20226 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:15.102602   20226 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:15.102605   20226 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:15.102607   20226 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:15.102609   20226 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:15.102612   20226 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:15.102614   20226 cri.go:89] found id: ""
	I1019 16:23:15.102657   20226 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:15.118555   20226 out.go:203] 
	W1019 16:23:15.120951   20226 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:15.120974   20226 out.go:285] * 
	* 
	W1019 16:23:15.124413   20226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:15.126274   20226 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.684002ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004568226s
addons_test.go:463: (dbg) Run:  kubectl --context addons-557770 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (273.631212ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:09.667866   19529 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:09.668194   19529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:09.668207   19529 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:09.668213   19529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:09.668443   19529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:09.668721   19529 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:09.669102   19529 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:09.669123   19529 addons.go:607] checking whether the cluster is paused
	I1019 16:23:09.669222   19529 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:09.669238   19529 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:09.669714   19529 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:09.690262   19529 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:09.690396   19529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:09.713545   19529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:09.817472   19529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:09.817559   19529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:09.853771   19529 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:09.853809   19529 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:09.853814   19529 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:09.853819   19529 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:09.853823   19529 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:09.853829   19529 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:09.853833   19529 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:09.853838   19529 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:09.853843   19529 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:09.853851   19529 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:09.853856   19529 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:09.853860   19529 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:09.853865   19529 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:09.853870   19529 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:09.853874   19529 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:09.853882   19529 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:09.853886   19529 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:09.853892   19529 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:09.853896   19529 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:09.853900   19529 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:09.853904   19529 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:09.853908   19529 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:09.853912   19529 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:09.853916   19529 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:09.853920   19529 cri.go:89] found id: ""
	I1019 16:23:09.853973   19529 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:09.872690   19529 out.go:203] 
	W1019 16:23:09.874235   19529 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:09.874265   19529 out.go:285] * 
	* 
	W1019 16:23:09.877837   19529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:09.879282   19529 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)

TestAddons/parallel/CSI (44.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1019 16:23:04.712461    7228 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1019 16:23:04.715977    7228 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 16:23:04.716002    7228 kapi.go:107] duration metric: took 3.557837ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.567091ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-557770 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc -o jsonpath={.status.phase} -n default
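The run of helpers_test.go:402 lines above is the harness polling the claim's phase with `kubectl get pvc hpvc -o jsonpath={.status.phase}` until it reports Bound. A minimal Go sketch of that loop, assuming plain kubectl shell-outs (the function name, poll interval, and main wrapper are illustrative, not the harness's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound mirrors the helpers_test.go:402 loop above: poll the claim's
	// .status.phase via kubectl until it reports Bound or the deadline passes.
	func waitPVCBound(kubeContext, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption, not the harness's value
		}
		return fmt.Errorf("pvc %q not Bound within %v", name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-557770", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
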
addons_test.go:562: (dbg) Run:  kubectl --context addons-557770 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [17d9e5c6-3e5c-4478-87ea-8ab758a9c701] Pending
helpers_test.go:352: "task-pv-pod" [17d9e5c6-3e5c-4478-87ea-8ab758a9c701] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [17d9e5c6-3e5c-4478-87ea-8ab758a9c701] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004016209s
addons_test.go:572: (dbg) Run:  kubectl --context addons-557770 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-557770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-557770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-557770 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-557770 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-557770 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-557770 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f5dab36a-3627-43be-85ec-d5a5467681ed] Pending
helpers_test.go:352: "task-pv-pod-restore" [f5dab36a-3627-43be-85ec-d5a5467681ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f5dab36a-3627-43be-85ec-d5a5467681ed] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003230898s
addons_test.go:614: (dbg) Run:  kubectl --context addons-557770 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-557770 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-557770 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (236.489209ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 16:23:49.155284   21354 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:49.155460   21354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:49.155472   21354 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:49.155478   21354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:49.155712   21354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:49.156002   21354 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:49.156454   21354 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:49.156475   21354 addons.go:607] checking whether the cluster is paused
	I1019 16:23:49.156599   21354 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:49.156614   21354 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:49.157028   21354 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:49.175288   21354 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:49.175354   21354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:49.193703   21354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:49.290095   21354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:49.290175   21354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:49.320970   21354 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:49.320990   21354 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:49.320994   21354 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:49.320997   21354 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:49.321000   21354 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:49.321003   21354 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:49.321006   21354 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:49.321008   21354 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:49.321011   21354 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:49.321028   21354 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:49.321031   21354 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:49.321033   21354 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:49.321036   21354 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:49.321038   21354 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:49.321041   21354 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:49.321049   21354 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:49.321054   21354 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:49.321058   21354 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:49.321060   21354 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:49.321063   21354 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:49.321099   21354 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:49.321107   21354 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:49.321111   21354 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:49.321122   21354 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:49.321125   21354 cri.go:89] found id: ""
	I1019 16:23:49.321162   21354 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:49.335826   21354 out.go:203] 
	W1019 16:23:49.337174   21354 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:49.337190   21354 out.go:285] * 
	W1019 16:23:49.340274   21354 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:49.341699   21354 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
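Both disable calls above fail before touching any addon: minikube first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which exits 1 on this crio node because runc's state directory /run/runc does not exist. A short sketch of the failing probe, plus the crictl query the log itself runs at cri.go:54, assuming command access to the node (an illustration of the two probes, not minikube's actual code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The probe that aborts both disables above: minikube treats any
		// non-zero exit here as MK_ADDON_DISABLE_PAUSED.
		if out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
			fmt.Printf("runc probe failed (as in the log): %v\n%s", err, out)
		}
		// The same crictl query the log runs at cri.go:54 answers the question
		// on a crio node without depending on runc's state directory.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		fmt.Printf("crictl err=%v, kube-system containers:\n%s", err, out)
	}
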
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (245.682274ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 16:23:49.394291   21415 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:49.394478   21415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:49.394491   21415 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:49.394498   21415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:49.394834   21415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:49.395227   21415 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:49.395723   21415 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:49.395745   21415 addons.go:607] checking whether the cluster is paused
	I1019 16:23:49.395879   21415 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:49.395896   21415 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:49.396485   21415 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:49.416949   21415 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:49.417018   21415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:49.437786   21415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:49.533896   21415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:49.533986   21415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:49.564630   21415 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:49.564658   21415 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:49.564663   21415 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:49.564669   21415 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:49.564677   21415 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:49.564691   21415 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:49.564697   21415 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:49.564704   21415 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:49.564709   21415 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:49.564717   21415 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:49.564722   21415 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:49.564727   21415 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:49.564732   21415 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:49.564736   21415 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:49.564741   21415 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:49.564755   21415 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:49.564761   21415 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:49.564765   21415 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:49.564767   21415 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:49.564770   21415 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:49.564774   21415 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:49.564777   21415 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:49.564779   21415 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:49.564782   21415 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:49.564784   21415 cri.go:89] found id: ""
	I1019 16:23:49.564823   21415 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:49.580371   21415 out.go:203] 
	W1019 16:23:49.581868   21415 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:49.581900   21415 out.go:285] * 
	W1019 16:23:49.586367   21415 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:49.588059   21415 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.88s)

TestAddons/parallel/Headlamp (2.67s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-557770 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-557770 --alsologtostderr -v=1: exit status 11 (253.362091ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 16:22:56.641102   17369 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:22:56.641458   17369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:56.641473   17369 out.go:374] Setting ErrFile to fd 2...
	I1019 16:22:56.641479   17369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:56.641757   17369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:22:56.642194   17369 mustload.go:66] Loading cluster: addons-557770
	I1019 16:22:56.642684   17369 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:56.642713   17369 addons.go:607] checking whether the cluster is paused
	I1019 16:22:56.642860   17369 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:56.642889   17369 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:22:56.643464   17369 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:22:56.663260   17369 ssh_runner.go:195] Run: systemctl --version
	I1019 16:22:56.663327   17369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:22:56.683482   17369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:22:56.782291   17369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:22:56.782381   17369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:22:56.814533   17369 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:22:56.814554   17369 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:22:56.814558   17369 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:22:56.814562   17369 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:22:56.814564   17369 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:22:56.814569   17369 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:22:56.814573   17369 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:22:56.814577   17369 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:22:56.814580   17369 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:22:56.814591   17369 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:22:56.814595   17369 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:22:56.814599   17369 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:22:56.814603   17369 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:22:56.814607   17369 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:22:56.814611   17369 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:22:56.814617   17369 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:22:56.814622   17369 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:22:56.814628   17369 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:22:56.814632   17369 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:22:56.814635   17369 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:22:56.814637   17369 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:22:56.814640   17369 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:22:56.814643   17369 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:22:56.814645   17369 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:22:56.814647   17369 cri.go:89] found id: ""
	I1019 16:22:56.814702   17369 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:22:56.830278   17369 out.go:203] 
	W1019 16:22:56.832050   17369 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:22:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:22:56.832111   17369 out.go:285] * 
	W1019 16:22:56.836170   17369 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:22:56.838259   17369 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-557770 --alsologtostderr -v=1": exit status 11
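The enable path performs the identical paused check, so it fails the same way, just under MK_ADDON_ENABLE_PAUSED instead of MK_ADDON_DISABLE_PAUSED. A trivial sketch that reproduces the missing precondition named in the stderr above (the path comes straight from the error message; run it on the node, e.g. inside `minikube ssh`):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// stderr above reports: open /run/runc: no such file or directory.
		// This stat reproduces the precondition that makes every paused
		// check in this run fail on the crio node.
		if _, err := os.Stat("/run/runc"); err != nil {
			fmt.Println("runc state directory missing:", err)
		} else {
			fmt.Println("/run/runc exists; the paused check would proceed")
		}
	}
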
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-557770
helpers_test.go:243: (dbg) docker inspect addons-557770:

-- stdout --
	[
	    {
	        "Id": "e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f",
	        "Created": "2025-10-19T16:21:01.416576155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:21:01.449249778Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/hosts",
	        "LogPath": "/var/lib/docker/containers/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f/e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f-json.log",
	        "Name": "/addons-557770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-557770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-557770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9d7c66cdc0d595285ce592a3326e3fd70a592e77145a07f4ae472ccf14f076f",
	                "LowerDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ca175b9498e0f07cca83ff2f3379fedc9eb67217735198daa727f179161e09b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-557770",
	                "Source": "/var/lib/docker/volumes/addons-557770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-557770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-557770",
	                "name.minikube.sigs.k8s.io": "addons-557770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb12be6277a9b3b2d91b7c1033229f388039b0b6b3aefe597e1caaadd677c015",
	            "SandboxKey": "/var/run/docker/netns/cb12be6277a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-557770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:7a:83:c2:9e:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa72c0f0c5a3e65694960e1b32d75351c671796cc32b6ceb00202dcb25d58472",
	                    "EndpointID": "34f039e463948afb9436de9842a2de7b6c75b6370561f32f3a140ab1933e5b10",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-557770",
	                        "e9d7c66cdc0d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
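The inspect output ties the earlier SSH setup together: 22/tcp is published on 127.0.0.1:32768, exactly the endpoint sshutil.go:53 connected to. A minimal sketch of the same port lookup, using the Go template shown at cli_runner.go:164 above (illustrative, not the harness's code; the container name matches this report's profile):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the harness passes to `docker container inspect -f`.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-557770").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32768 in this run
	}
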
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-557770 -n addons-557770
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-557770 logs -n 25: (1.218168325s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-018429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-018429   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-018429                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-018429   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ -o=json --download-only -p download-only-870641 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-870641   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-870641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-870641   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-018429                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-018429   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-870641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-870641   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ --download-only -p download-docker-110482 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-110482 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p download-docker-110482                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-110482 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ --download-only -p binary-mirror-444864 --alsologtostderr --binary-mirror http://127.0.0.1:37593 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-444864   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p binary-mirror-444864                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-444864   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ addons  │ disable dashboard -p addons-557770                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-557770                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ start   │ -p addons-557770 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:22 UTC │
	│ addons  │ addons-557770 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ addons-557770 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-557770 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-557770          │ jenkins │ v1.37.0 │ 19 Oct 25 16:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:37.001527    8599 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:37.001638    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:37.001643    8599 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:37.001647    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:37.001854    8599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:20:37.002393    8599 out.go:368] Setting JSON to false
	I1019 16:20:37.003191    8599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":183,"bootTime":1760890654,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:37.003281    8599 start.go:143] virtualization: kvm guest
	I1019 16:20:37.005248    8599 out.go:179] * [addons-557770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:20:37.006659    8599 notify.go:221] Checking for updates...
	I1019 16:20:37.006723    8599 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:20:37.008491    8599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:37.010236    8599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:20:37.011864    8599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:20:37.013200    8599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:20:37.014635    8599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:20:37.016001    8599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:37.040252    8599 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:37.040410    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:37.101098    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:37.089254389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:37.101199    8599 docker.go:319] overlay module found
	I1019 16:20:37.102914    8599 out.go:179] * Using the docker driver based on user configuration
	I1019 16:20:37.104249    8599 start.go:309] selected driver: docker
	I1019 16:20:37.104264    8599 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:37.104276    8599 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:20:37.104878    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:37.161990    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:37.151615083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:37.162190    8599 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:37.162402    8599 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:20:37.164311    8599 out.go:179] * Using Docker driver with root privileges
	I1019 16:20:37.165525    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:20:37.165585    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:37.165595    8599 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:37.165690    8599 start.go:353] cluster config:
	{Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:37.167138    8599 out.go:179] * Starting "addons-557770" primary control-plane node in "addons-557770" cluster
	I1019 16:20:37.168420    8599 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:37.169584    8599 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:37.170735    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:37.170776    8599 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 16:20:37.170783    8599 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:37.170858    8599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:37.170928    8599 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 16:20:37.170941    8599 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:20:37.171275    8599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json ...
	I1019 16:20:37.171300    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json: {Name:mk0b880f81c44948ba924d3b86e3229bc276fcc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:20:37.187758    8599 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:37.187924    8599 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:37.187944    8599 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:37.187949    8599 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:37.187956    8599 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:37.187964    8599 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 16:20:49.559891    8599 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 16:20:49.559937    8599 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:20:49.560008    8599 start.go:360] acquireMachinesLock for addons-557770: {Name:mkd8c0d521d8e4e2b3309f4cceb29802c8ff5ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:20:49.560147    8599 start.go:364] duration metric: took 118.204µs to acquireMachinesLock for "addons-557770"
	I1019 16:20:49.560181    8599 start.go:93] Provisioning new machine with config: &{Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:20:49.560268    8599 start.go:125] createHost starting for "" (driver="docker")
	I1019 16:20:49.562195    8599 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 16:20:49.562417    8599 start.go:159] libmachine.API.Create for "addons-557770" (driver="docker")
	I1019 16:20:49.562451    8599 client.go:171] LocalClient.Create starting
	I1019 16:20:49.562566    8599 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 16:20:49.664253    8599 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 16:20:49.844256    8599 cli_runner.go:164] Run: docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 16:20:49.861774    8599 cli_runner.go:211] docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 16:20:49.861858    8599 network_create.go:284] running [docker network inspect addons-557770] to gather additional debugging logs...
	I1019 16:20:49.861883    8599 cli_runner.go:164] Run: docker network inspect addons-557770
	W1019 16:20:49.879492    8599 cli_runner.go:211] docker network inspect addons-557770 returned with exit code 1
	I1019 16:20:49.879519    8599 network_create.go:287] error running [docker network inspect addons-557770]: docker network inspect addons-557770: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-557770 not found
	I1019 16:20:49.879530    8599 network_create.go:289] output of [docker network inspect addons-557770]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-557770 not found
	
	** /stderr **
	I1019 16:20:49.879612    8599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:20:49.897397    8599 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d9fd90}
	I1019 16:20:49.897430    8599 network_create.go:124] attempt to create docker network addons-557770 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 16:20:49.897470    8599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-557770 addons-557770
	I1019 16:20:49.956189    8599 network_create.go:108] docker network addons-557770 192.168.49.0/24 created
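
For orientation, a minimal sketch of the network step recorded above, using only the standard Docker CLI flags taken from the logged command; the network name demo-net is a hypothetical stand-in for minikube's profile-named network:

    # Create a user-defined bridge network with a fixed subnet and gateway,
    # so a container attached to it can be given a predictable static IP.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 \
      demo-net
    # Confirm the subnet/gateway that attached containers will use.
    docker network inspect demo-net --format '{{json .IPAM.Config}}'
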
	I1019 16:20:49.956236    8599 kic.go:121] calculated static IP "192.168.49.2" for the "addons-557770" container
	I1019 16:20:49.956300    8599 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 16:20:49.973143    8599 cli_runner.go:164] Run: docker volume create addons-557770 --label name.minikube.sigs.k8s.io=addons-557770 --label created_by.minikube.sigs.k8s.io=true
	I1019 16:20:49.992160    8599 oci.go:103] Successfully created a docker volume addons-557770
	I1019 16:20:49.992248    8599 cli_runner.go:164] Run: docker run --rm --name addons-557770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --entrypoint /usr/bin/test -v addons-557770:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 16:20:56.849519    8599 cli_runner.go:217] Completed: docker run --rm --name addons-557770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --entrypoint /usr/bin/test -v addons-557770:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.857232984s)
	I1019 16:20:56.849551    8599 oci.go:107] Successfully prepared a docker volume addons-557770
	I1019 16:20:56.849601    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:56.849630    8599 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 16:20:56.849688    8599 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-557770:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 16:21:01.341030    8599 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-557770:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.491302405s)
	I1019 16:21:01.341060    8599 kic.go:203] duration metric: took 4.491428098s to extract preloaded images to volume ...
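
The two docker run commands above populate the named volume before the node container exists. A condensed sketch of that pattern, with flags taken from the logged command; the volume name, tarball path, and image are illustrative, and the image is assumed to ship tar with lz4 support:

    # Mount the preload tarball read-only plus the named volume, and untar
    # straight into the volume from a throwaway container.
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      --entrypoint /usr/bin/tar \
      some/base-image -I lz4 -xf /preloaded.tar -C /extractDir
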
	W1019 16:21:01.341184    8599 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 16:21:01.341228    8599 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 16:21:01.341285    8599 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 16:21:01.400081    8599 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-557770 --name addons-557770 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557770 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-557770 --network addons-557770 --ip 192.168.49.2 --volume addons-557770:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 16:21:01.697891    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Running}}
	I1019 16:21:01.719050    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:01.738178    8599 cli_runner.go:164] Run: docker exec addons-557770 stat /var/lib/dpkg/alternatives/iptables
	I1019 16:21:01.792492    8599 oci.go:144] the created container "addons-557770" has a running status.
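
A sketch of the liveness check behind the two inspect calls above; demo-node stands in for the container name:

    # Ask Docker for the container state and fail fast if it is not running.
    status="$(docker container inspect demo-node --format '{{.State.Status}}')"
    if [ "$status" != "running" ]; then
      echo "container not running: $status" >&2
      exit 1
    fi
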
	I1019 16:21:01.792526    8599 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa...
	I1019 16:21:01.997323    8599 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 16:21:02.033794    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:02.057920    8599 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 16:21:02.057947    8599 kic_runner.go:114] Args: [docker exec --privileged addons-557770 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 16:21:02.114224    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:02.134225    8599 machine.go:94] provisionDockerMachine start ...
	I1019 16:21:02.134328    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.153843    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.154133    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.154152    8599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:21:02.289964    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-557770
	
	I1019 16:21:02.289999    8599 ubuntu.go:182] provisioning hostname "addons-557770"
	I1019 16:21:02.290086    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.308125    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.308371    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.308386    8599 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-557770 && echo "addons-557770" | sudo tee /etc/hostname
	I1019 16:21:02.453445    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-557770
	
	I1019 16:21:02.453519    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:02.471453    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:02.471701    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:02.471728    8599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-557770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-557770/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-557770' | sudo tee -a /etc/hosts; 
				fi
			fi
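
The SSH script above pins the new hostname in /etc/hosts idempotently. A slightly simplified, annotated sketch of the same logic:

    # Only touch /etc/hosts if no entry for the hostname exists yet.
    if ! grep -q 'addons-557770' /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # Rewrite the Debian-style 127.0.1.1 alias line in place...
        sudo sed -i 's/^127\.0\.1\.1[[:space:]].*/127.0.1.1 addons-557770/' /etc/hosts
      else
        # ...or append one if it does not exist.
        echo '127.0.1.1 addons-557770' | sudo tee -a /etc/hosts
      fi
    fi
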
	I1019 16:21:02.604965    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:21:02.605010    8599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 16:21:02.605044    8599 ubuntu.go:190] setting up certificates
	I1019 16:21:02.605057    8599 provision.go:84] configureAuth start
	I1019 16:21:02.605124    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:02.623212    8599 provision.go:143] copyHostCerts
	I1019 16:21:02.623301    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 16:21:02.623432    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 16:21:02.623515    8599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 16:21:02.623578    8599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.addons-557770 san=[127.0.0.1 192.168.49.2 addons-557770 localhost minikube]
	I1019 16:21:03.287724    8599 provision.go:177] copyRemoteCerts
	I1019 16:21:03.287784    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:21:03.287817    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.306256    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.402262    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 16:21:03.421832    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 16:21:03.439279    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:21:03.456563    8599 provision.go:87] duration metric: took 851.495239ms to configureAuth
	I1019 16:21:03.456591    8599 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:21:03.456795    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:03.456897    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.474939    8599 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:03.475169    8599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:03.475189    8599 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:21:03.723061    8599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:21:03.723101    8599 machine.go:97] duration metric: took 1.58885289s to provisionDockerMachine
	I1019 16:21:03.723114    8599 client.go:174] duration metric: took 14.160654765s to LocalClient.Create
	I1019 16:21:03.723133    8599 start.go:167] duration metric: took 14.160717768s to libmachine.API.Create "addons-557770"
	I1019 16:21:03.723153    8599 start.go:293] postStartSetup for "addons-557770" (driver="docker")
	I1019 16:21:03.723164    8599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:21:03.723222    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:21:03.723258    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.741775    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.841359    8599 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:21:03.845057    8599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:21:03.845100    8599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:21:03.845113    8599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 16:21:03.845177    8599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 16:21:03.845203    8599 start.go:296] duration metric: took 122.044139ms for postStartSetup
	I1019 16:21:03.845531    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:03.863610    8599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/config.json ...
	I1019 16:21:03.863926    8599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:21:03.863978    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.881731    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:03.975202    8599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:21:03.979826    8599 start.go:128] duration metric: took 14.419541469s to createHost
	I1019 16:21:03.979856    8599 start.go:83] releasing machines lock for "addons-557770", held for 14.419693478s
	I1019 16:21:03.979929    8599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557770
	I1019 16:21:03.997689    8599 ssh_runner.go:195] Run: cat /version.json
	I1019 16:21:03.997737    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:03.997782    8599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:21:03.997849    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:04.017608    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:04.017960    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:04.169611    8599 ssh_runner.go:195] Run: systemctl --version
	I1019 16:21:04.176363    8599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:21:04.211541    8599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:21:04.216378    8599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:21:04.216447    8599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:21:04.243460    8599 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
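
The find/mv above quarantines conflicting CNI definitions rather than deleting them; renamed files keep a .mk_disabled suffix, so the step is reversible. An equivalent standalone sketch:

    # Rename any bridge/podman CNI configs that are not already disabled,
    # leaving kindnet as the only CNI definition CRI-O will load.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
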
	I1019 16:21:04.243488    8599 start.go:496] detecting cgroup driver to use...
	I1019 16:21:04.243525    8599 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 16:21:04.243579    8599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:21:04.259311    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:21:04.272223    8599 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:21:04.272282    8599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:21:04.288861    8599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:21:04.306562    8599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:21:04.389940    8599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:21:04.476397    8599 docker.go:234] disabling docker service ...
	I1019 16:21:04.476473    8599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:21:04.494901    8599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:21:04.508316    8599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:21:04.596295    8599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:21:04.677663    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
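
The sequence above retires the competing runtimes for good: stop the socket and service, disable the socket so socket activation cannot revive them, then mask the service. A condensed sketch:

    # Stop, disable, and mask docker and cri-docker so CRI-O owns the node.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    # Verify nothing re-activated the daemon behind CRI-O's back.
    if sudo systemctl is-active --quiet docker; then
      echo "docker still active" >&2
    fi
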
	I1019 16:21:04.690649    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:21:04.705705    8599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:21:04.705772    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.716248    8599 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 16:21:04.716318    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.725596    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.734626    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.743880    8599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:21:04.752558    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.762018    8599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.775951    8599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:04.784950    8599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:21:04.792686    8599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 16:21:04.792738    8599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 16:21:04.805905    8599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 16:21:04.814179    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:04.888360    8599 ssh_runner.go:195] Run: sudo systemctl restart crio
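
Taken together, the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in four ways: pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open low ports to unprivileged pods. A condensed sketch of the same edits, commands lifted from the log:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    # Ensure a default_sysctls list exists, then let pods bind ports < 1024.
    sudo grep -q '^ *default_sysctls' "$conf" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio
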
	I1019 16:21:04.989451    8599 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:21:04.989558    8599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:21:04.993651    8599 start.go:564] Will wait 60s for crictl version
	I1019 16:21:04.993722    8599 ssh_runner.go:195] Run: which crictl
	I1019 16:21:04.997411    8599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:21:05.021517    8599 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 16:21:05.021636    8599 ssh_runner.go:195] Run: crio --version
	I1019 16:21:05.049171    8599 ssh_runner.go:195] Run: crio --version
	I1019 16:21:05.078433    8599 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 16:21:05.079880    8599 cli_runner.go:164] Run: docker network inspect addons-557770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:05.097340    8599 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 16:21:05.101392    8599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:05.112285    8599 kubeadm.go:884] updating cluster {Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:21:05.112418    8599 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:05.112468    8599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:05.144315    8599 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:05.144339    8599 crio.go:433] Images already preloaded, skipping extraction
	I1019 16:21:05.144411    8599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:05.170024    8599 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:05.170047    8599 cache_images.go:86] Images are preloaded, skipping loading
	I1019 16:21:05.170055    8599 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 16:21:05.170162    8599 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-557770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
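
A sketch of how a drop-in like the one above reaches the node, mirroring the scp and daemon-reload steps logged further down; the ExecStart line is abbreviated here, and the paths are the ones in the log:

    # Install the systemd drop-in that overrides kubelet's ExecStart,
    # then reload unit files and start kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
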
	I1019 16:21:05.170239    8599 ssh_runner.go:195] Run: crio config
	I1019 16:21:05.215127    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:21:05.215151    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:05.215167    8599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:21:05.215187    8599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-557770 NodeName:addons-557770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:21:05.215323    8599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-557770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 16:21:05.215378    8599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:21:05.223279    8599 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:21:05.223344    8599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:21:05.231525    8599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 16:21:05.244464    8599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:21:05.259865    8599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
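
With the config staged at /var/tmp/minikube/kubeadm.yaml.new, it can also be exercised without touching the node; this dry run is an assumption added for illustration, not a step the log performs:

    # kubeadm parses the config and prints what it would do, mutating nothing.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
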
	I1019 16:21:05.272922    8599 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:21:05.276762    8599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:05.287045    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:05.362385    8599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:05.388668    8599 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770 for IP: 192.168.49.2
	I1019 16:21:05.388697    8599 certs.go:195] generating shared ca certs ...
	I1019 16:21:05.388719    8599 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.388856    8599 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 16:21:05.763533    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt ...
	I1019 16:21:05.763564    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt: {Name:mk44f8e3a76dd83cca35327978860564665e7c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.763742    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key ...
	I1019 16:21:05.763759    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key: {Name:mk431f409d1be8f924b8d1e3de8f01ef81484ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:05.763837    8599 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 16:21:06.038748    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt ...
	I1019 16:21:06.038784    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt: {Name:mk366c71806b79180d7079a88d65e6419023392d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.038955    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key ...
	I1019 16:21:06.038967    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key: {Name:mk855f6d3642997c9f92dc72ec5c319a8fccbf7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.039040    8599 certs.go:257] generating profile certs ...
	I1019 16:21:06.039118    8599 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key
	I1019 16:21:06.039139    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt with IP's: []
	I1019 16:21:06.247192    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt ...
	I1019 16:21:06.247222    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: {Name:mk438331dfa0d6b49c8f56c3992fd1b0c789d59a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.247394    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key ...
	I1019 16:21:06.247406    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.key: {Name:mkc6ed2572f7106eb844bc591483dde318b77cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.247485    8599 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08
	I1019 16:21:06.247503    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 16:21:06.452505    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 ...
	I1019 16:21:06.452537    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08: {Name:mk61b8e1a223d3350c0d71f06d27dd73bbc319e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.452713    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08 ...
	I1019 16:21:06.452725    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08: {Name:mkbbb235b94daaa2d21108ad873fa041c1e4d991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.452806    8599 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt.e8f8bc08 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt
	I1019 16:21:06.452893    8599 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key.e8f8bc08 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key
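
The apiserver certificate was generated above with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; a quick way to confirm they made it into the issued cert (the path is shortened for illustration):

    openssl x509 -noout -text -in .minikube/profiles/addons-557770/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
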
	I1019 16:21:06.452940    8599 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key
	I1019 16:21:06.452958    8599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt with IP's: []
	I1019 16:21:06.899042    8599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt ...
	I1019 16:21:06.899081    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt: {Name:mk68d1d4e27c342b829886fdb40b43beef811c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.899247    8599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key ...
	I1019 16:21:06.899258    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key: {Name:mke44c82e24a4c54aecb289324ed9b282d52ebad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:06.899454    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 16:21:06.899489    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:21:06.899512    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:21:06.899540    8599 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 16:21:06.900130    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:21:06.918435    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:21:06.936440    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:21:06.954843    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 16:21:06.973869    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 16:21:06.991291    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 16:21:07.009288    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:21:07.027274    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 16:21:07.045284    8599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:21:07.064711    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:21:07.077962    8599 ssh_runner.go:195] Run: openssl version
	I1019 16:21:07.084166    8599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:21:07.097092    8599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.101047    8599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.101118    8599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:07.135518    8599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
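	Note: the two commands above install minikube's CA into the system OpenSSL trust directory, where certificates are looked up by subject-hash filenames (here b5213941.0). A minimal standalone sketch of the same technique, using the paths shown in this log:

		# compute the OpenSSL subject hash, then link the cert under <hash>.0
		HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"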
	I1019 16:21:07.144765    8599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:21:07.148436    8599 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 16:21:07.148482    8599 kubeadm.go:401] StartCluster: {Name:addons-557770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-557770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:07.148556    8599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:21:07.148599    8599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:21:07.174719    8599 cri.go:89] found id: ""
	I1019 16:21:07.174782    8599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:21:07.182820    8599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:21:07.190899    8599 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 16:21:07.190969    8599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:21:07.199000    8599 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 16:21:07.199018    8599 kubeadm.go:158] found existing configuration files:
	
	I1019 16:21:07.199091    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 16:21:07.206784    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 16:21:07.206838    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 16:21:07.214703    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 16:21:07.222641    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 16:21:07.222727    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:21:07.230327    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 16:21:07.237984    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 16:21:07.238044    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:21:07.245796    8599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 16:21:07.253554    8599 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 16:21:07.253605    8599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 16:21:07.260985    8599 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 16:21:07.296707    8599 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 16:21:07.296783    8599 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 16:21:07.318802    8599 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 16:21:07.318895    8599 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 16:21:07.318944    8599 kubeadm.go:319] OS: Linux
	I1019 16:21:07.319015    8599 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 16:21:07.319057    8599 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 16:21:07.319140    8599 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 16:21:07.319186    8599 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 16:21:07.319225    8599 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 16:21:07.319263    8599 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 16:21:07.319363    8599 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 16:21:07.319429    8599 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 16:21:07.376452    8599 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 16:21:07.376590    8599 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 16:21:07.376746    8599 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 16:21:07.386380    8599 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 16:21:07.388411    8599 out.go:252]   - Generating certificates and keys ...
	I1019 16:21:07.388535    8599 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 16:21:07.388620    8599 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 16:21:07.507137    8599 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 16:21:08.012185    8599 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 16:21:08.221709    8599 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 16:21:08.319217    8599 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 16:21:08.509120    8599 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 16:21:08.509293    8599 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-557770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:08.828827    8599 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 16:21:08.829029    8599 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-557770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:09.159860    8599 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 16:21:09.572535    8599 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 16:21:09.965134    8599 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 16:21:09.965258    8599 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 16:21:10.073234    8599 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 16:21:10.288320    8599 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 16:21:10.436040    8599 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 16:21:10.715794    8599 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 16:21:10.818023    8599 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 16:21:10.818501    8599 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 16:21:10.823565    8599 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 16:21:10.825188    8599 out.go:252]   - Booting up control plane ...
	I1019 16:21:10.825299    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 16:21:10.825388    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 16:21:10.826113    8599 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 16:21:10.839701    8599 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 16:21:10.839885    8599 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 16:21:10.846350    8599 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 16:21:10.846474    8599 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 16:21:10.846519    8599 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 16:21:10.944926    8599 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 16:21:10.945113    8599 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 16:21:11.446580    8599 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.871912ms
	I1019 16:21:11.449234    8599 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 16:21:11.449391    8599 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 16:21:11.449519    8599 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 16:21:11.449622    8599 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 16:21:12.949061    8599 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.499746023s
	I1019 16:21:13.683804    8599 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.234288977s
	I1019 16:21:15.450507    8599 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001247285s
	I1019 16:21:15.462026    8599 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 16:21:15.472453    8599 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 16:21:15.481577    8599 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 16:21:15.481837    8599 kubeadm.go:319] [mark-control-plane] Marking the node addons-557770 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 16:21:15.489415    8599 kubeadm.go:319] [bootstrap-token] Using token: 5153m7.ghqmp7zdo9wx0usq
	I1019 16:21:15.490639    8599 out.go:252]   - Configuring RBAC rules ...
	I1019 16:21:15.490779    8599 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 16:21:15.494010    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 16:21:15.499759    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 16:21:15.503047    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 16:21:15.506182    8599 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 16:21:15.509626    8599 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 16:21:15.855970    8599 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 16:21:16.273537    8599 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 16:21:16.856153    8599 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 16:21:16.857106    8599 kubeadm.go:319] 
	I1019 16:21:16.857180    8599 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 16:21:16.857189    8599 kubeadm.go:319] 
	I1019 16:21:16.857253    8599 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 16:21:16.857279    8599 kubeadm.go:319] 
	I1019 16:21:16.857327    8599 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 16:21:16.857407    8599 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 16:21:16.857504    8599 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 16:21:16.857522    8599 kubeadm.go:319] 
	I1019 16:21:16.857599    8599 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 16:21:16.857608    8599 kubeadm.go:319] 
	I1019 16:21:16.857678    8599 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 16:21:16.857688    8599 kubeadm.go:319] 
	I1019 16:21:16.857782    8599 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 16:21:16.857891    8599 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 16:21:16.857978    8599 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 16:21:16.857988    8599 kubeadm.go:319] 
	I1019 16:21:16.858125    8599 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 16:21:16.858220    8599 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 16:21:16.858229    8599 kubeadm.go:319] 
	I1019 16:21:16.858323    8599 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5153m7.ghqmp7zdo9wx0usq \
	I1019 16:21:16.858476    8599 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 16:21:16.858505    8599 kubeadm.go:319] 	--control-plane 
	I1019 16:21:16.858515    8599 kubeadm.go:319] 
	I1019 16:21:16.858634    8599 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 16:21:16.858644    8599 kubeadm.go:319] 
	I1019 16:21:16.858784    8599 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5153m7.ghqmp7zdo9wx0usq \
	I1019 16:21:16.858965    8599 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 16:21:16.860518    8599 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 16:21:16.860646    8599 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
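	Note: the sha256 value in the join commands above is a hash of the cluster CA's public key. For reference, it can be recomputed on the control plane with the standard kubeadm recipe (a sketch, assuming the default RSA CA key; the cert path is minikube's certificateDir from this run rather than kubeadm's usual /etc/kubernetes/pki):

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex \
		  | sed 's/^.* //'   # strip the "(stdin)= " prefix, leaving the bare hex digest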
	I1019 16:21:16.860691    8599 cni.go:84] Creating CNI manager for ""
	I1019 16:21:16.860705    8599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:16.863213    8599 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 16:21:16.864381    8599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 16:21:16.868639    8599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 16:21:16.868655    8599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 16:21:16.882258    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 16:21:17.083995    8599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:21:17.084135    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:17.084180    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-557770 minikube.k8s.io/updated_at=2025_10_19T16_21_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=addons-557770 minikube.k8s.io/primary=true
	I1019 16:21:17.093579    8599 ops.go:34] apiserver oom_adj: -16
	I1019 16:21:17.170414    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:17.671280    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:18.170534    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:18.670771    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:19.171262    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:19.671244    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:20.171375    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:20.670845    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.170704    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.671323    8599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:21.737075    8599 kubeadm.go:1114] duration metric: took 4.652977892s to wait for elevateKubeSystemPrivileges
	I1019 16:21:21.737117    8599 kubeadm.go:403] duration metric: took 14.588636179s to StartCluster
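	Note: the repeated "get sa default" invocations above are minikube polling until the default ServiceAccount exists before declaring elevateKubeSystemPrivileges done. An equivalent standalone wait, as a plain-shell sketch built from the exact command in this log:

		until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # retry until the ServiceAccount controller has created "default"
		done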
	I1019 16:21:21.737142    8599 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:21.737266    8599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:21:21.737725    8599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:21.738906    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 16:21:21.738923    8599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:21.739015    8599 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 16:21:21.739135    8599 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-557770"
	I1019 16:21:21.739157    8599 addons.go:70] Setting yakd=true in profile "addons-557770"
	I1019 16:21:21.739189    8599 addons.go:239] Setting addon yakd=true in "addons-557770"
	I1019 16:21:21.739198    8599 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-557770"
	I1019 16:21:21.739188    8599 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-557770"
	I1019 16:21:21.739218    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739223    8599 addons.go:70] Setting cloud-spanner=true in profile "addons-557770"
	I1019 16:21:21.739222    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:21.739228    8599 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-557770"
	I1019 16:21:21.739234    8599 addons.go:239] Setting addon cloud-spanner=true in "addons-557770"
	I1019 16:21:21.739249    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739225    8599 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-557770"
	I1019 16:21:21.739252    8599 addons.go:70] Setting storage-provisioner=true in profile "addons-557770"
	I1019 16:21:21.739291    8599 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-557770"
	I1019 16:21:21.739295    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739303    8599 addons.go:239] Setting addon storage-provisioner=true in "addons-557770"
	I1019 16:21:21.739343    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739218    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739350    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.739391    8599 addons.go:70] Setting registry=true in profile "addons-557770"
	I1019 16:21:21.739409    8599 addons.go:239] Setting addon registry=true in "addons-557770"
	I1019 16:21:21.739421    8599 addons.go:70] Setting ingress=true in profile "addons-557770"
	I1019 16:21:21.739938    8599 addons.go:70] Setting volcano=true in profile "addons-557770"
	I1019 16:21:21.739955    8599 addons.go:239] Setting addon ingress=true in "addons-557770"
	I1019 16:21:21.739951    8599 addons.go:70] Setting default-storageclass=true in profile "addons-557770"
	I1019 16:21:21.739974    8599 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-557770"
	I1019 16:21:21.739978    8599 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-557770"
	I1019 16:21:21.740512    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740559    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740618    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.740764    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739964    8599 addons.go:239] Setting addon volcano=true in "addons-557770"
	I1019 16:21:21.740847    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.741151    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.741401    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.743047    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739847    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.744192    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739914    8599 addons.go:70] Setting inspektor-gadget=true in profile "addons-557770"
	I1019 16:21:21.745037    8599 addons.go:239] Setting addon inspektor-gadget=true in "addons-557770"
	I1019 16:21:21.745095    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.745653    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739921    8599 addons.go:70] Setting metrics-server=true in profile "addons-557770"
	I1019 16:21:21.746912    8599 addons.go:239] Setting addon metrics-server=true in "addons-557770"
	I1019 16:21:21.746944    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.747116    8599 out.go:179] * Verifying Kubernetes components...
	I1019 16:21:21.747346    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.739991    8599 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-557770"
	I1019 16:21:21.739780    8599 addons.go:70] Setting ingress-dns=true in profile "addons-557770"
	I1019 16:21:21.740011    8599 addons.go:70] Setting gcp-auth=true in profile "addons-557770"
	I1019 16:21:21.740011    8599 addons.go:70] Setting registry-creds=true in profile "addons-557770"
	I1019 16:21:21.740049    8599 addons.go:70] Setting volumesnapshots=true in profile "addons-557770"
	I1019 16:21:21.740129    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.748578    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.749437    8599 mustload.go:66] Loading cluster: addons-557770
	I1019 16:21:21.749617    8599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:21.749722    8599 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:21.750048    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.750137    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.751946    8599 addons.go:239] Setting addon ingress-dns=true in "addons-557770"
	I1019 16:21:21.752029    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753243    8599 addons.go:239] Setting addon registry-creds=true in "addons-557770"
	I1019 16:21:21.753268    8599 addons.go:239] Setting addon volumesnapshots=true in "addons-557770"
	I1019 16:21:21.753285    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753314    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.753975    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.768532    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.774113    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.774304    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.795770    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 16:21:21.797396    8599 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 16:21:21.799206    8599 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:21.799296    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 16:21:21.799392    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.801774    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 16:21:21.805322    8599 addons.go:239] Setting addon default-storageclass=true in "addons-557770"
	I1019 16:21:21.805372    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.805881    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.806058    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 16:21:21.807212    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 16:21:21.808273    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W1019 16:21:21.808318    8599 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 16:21:21.810418    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 16:21:21.811876    8599 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:21:21.812332    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 16:21:21.813620    8599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:21.813675    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:21:21.813811    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.816647    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 16:21:21.817961    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 16:21:21.818007    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 16:21:21.818360    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.834902    8599 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 16:21:21.843463    8599 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 16:21:21.843491    8599 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 16:21:21.843568    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.849613    8599 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 16:21:21.849832    8599 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 16:21:21.851171    8599 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:21.851193    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 16:21:21.851267    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.851691    8599 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:21.851873    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 16:21:21.851938    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.853773    8599 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-557770"
	I1019 16:21:21.853823    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.854313    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:21.854442    8599 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 16:21:21.855750    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 16:21:21.855771    8599 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 16:21:21.855837    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.861122    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:21.865062    8599 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 16:21:21.866423    8599 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 16:21:21.867508    8599 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 16:21:21.867529    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 16:21:21.867590    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.876235    8599 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 16:21:21.877934    8599 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:21.877964    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 16:21:21.878040    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.884105    8599 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 16:21:21.885295    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 16:21:21.885320    8599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 16:21:21.885384    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.896231    8599 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 16:21:21.901312    8599 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:21.901337    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 16:21:21.901402    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.908975    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.909442    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.912583    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 16:21:21.912828    8599 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:21.914665    8599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:21:21.914745    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.916578    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:21.917952    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.927691    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:21.929042    8599 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:21.929115    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 16:21:21.929186    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.929970    8599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
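	Note: unrolled for readability, the sed pipeline above rewrites the CoreDNS Corefile, inserting a "log" directive before the "errors" plugin and the following hosts block before the "forward . /etc/resolv.conf" line (reconstructed directly from the two sed -e expressions):

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }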
	I1019 16:21:21.931019    8599 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 16:21:21.936265    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 16:21:21.936293    8599 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 16:21:21.936362    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.938700    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.938709    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.944725    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.947378    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.953516    8599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:21.956325    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.956997    8599 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 16:21:21.958324    8599 out.go:179]   - Using image docker.io/busybox:stable
	I1019 16:21:21.959502    8599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:21.959574    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 16:21:21.959677    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:21.961432    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.976654    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.977275    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	W1019 16:21:21.985010    8599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:21.985052    8599 retry.go:31] will retry after 269.680961ms: ssh: handshake failed: EOF
	I1019 16:21:21.985128    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.993541    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.993805    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:21.999726    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:22.089692    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:22.093735    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:22.113821    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 16:21:22.113846    8599 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 16:21:22.129497    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:22.129817    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:22.134542    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:22.137489    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:22.144615    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:22.152505    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 16:21:22.152545    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 16:21:22.163978    8599 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:22.164005    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 16:21:22.174929    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 16:21:22.174967    8599 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 16:21:22.177749    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 16:21:22.177850    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 16:21:22.178779    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:22.182794    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:22.193240    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 16:21:22.193263    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 16:21:22.223635    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 16:21:22.223667    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 16:21:22.224822    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:22.227208    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 16:21:22.227306    8599 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 16:21:22.233227    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 16:21:22.233277    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 16:21:22.260624    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 16:21:22.260652    8599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 16:21:22.301909    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 16:21:22.302111    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 16:21:22.302149    8599 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 16:21:22.302201    8599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 16:21:22.330939    8599 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:22.330963    8599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 16:21:22.344440    8599 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:22.344464    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 16:21:22.369678    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 16:21:22.369775    8599 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 16:21:22.372840    8599 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1019 16:21:22.374692    8599 node_ready.go:35] waiting up to 6m0s for node "addons-557770" to be "Ready" ...
	I1019 16:21:22.375171    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 16:21:22.375231    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 16:21:22.396500    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:22.411451    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:22.437855    8599 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 16:21:22.437967    8599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 16:21:22.463694    8599 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:22.463722    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 16:21:22.489516    8599 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 16:21:22.489551    8599 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 16:21:22.514670    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:22.524560    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 16:21:22.524588    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 16:21:22.550477    8599 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:22.550501    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 16:21:22.580578    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 16:21:22.580620    8599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 16:21:22.615656    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:22.648366    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 16:21:22.648411    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 16:21:22.696862    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 16:21:22.696893    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 16:21:22.751350    8599 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:22.751392    8599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 16:21:22.807702    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:22.877642    8599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-557770" context rescaled to 1 replicas
	I1019 16:21:23.430793    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.247961964s)
	I1019 16:21:23.430931    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.206015493s)
	W1019 16:21:23.430954    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:23.430971    8599 retry.go:31] will retry after 334.626482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
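The "apiVersion not set, kind not set" error kubectl keeps returning for ig-crd.yaml means client-side validation found a YAML document in the manifest missing one or both of those required fields; a stray empty or comment-only document between "---" separators produces the same message. A minimal sketch that locates such documents, assuming gopkg.in/yaml.v3 (not minikube code):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open(os.Args[1]) // e.g. a local copy of ig-crd.yaml
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Printf("document %d: parse error: %v\n", i, err)
				break
			}
			if doc == nil {
				// an empty document between "---" separators decodes to nil
				fmt.Printf("document %d: empty\n", i)
				continue
			}
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}

Note that the --validate=false suggested by kubectl only disables this client-side check; the server will still reject an object with no kind.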
	I1019 16:21:23.431046    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034437139s)
	I1019 16:21:23.431077    8599 addons.go:480] Verifying addon metrics-server=true in "addons-557770"
	I1019 16:21:23.431135    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.019576953s)
	I1019 16:21:23.431362    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.252554566s)
	I1019 16:21:23.431403    8599 addons.go:480] Verifying addon ingress=true in "addons-557770"
	I1019 16:21:23.433386    8599 out.go:179] * Verifying ingress addon...
	I1019 16:21:23.433405    8599 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-557770 service yakd-dashboard -n yakd-dashboard
	
	I1019 16:21:23.436521    8599 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 16:21:23.439331    8599 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 16:21:23.439354    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:23.766211    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:23.839635    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.324916614s)
	W1019 16:21:23.839688    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:21:23.839712    8599 retry.go:31] will retry after 177.211195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
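The failure above ("no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first") is an ordering problem: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates its CRD, before the API server has established the new type, which is why the later retries succeed once the CRDs created on the first pass exist. A sketch of the usual two-phase apply that avoids the race (not minikube's code; the manifest file names are from the log and assumed to be present locally):

	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		// phase 1: create the CRDs on their own
		kubectl("apply",
			"-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"-f", "snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"-f", "snapshot.storage.k8s.io_volumesnapshots.yaml")
		// block until the API server serves the new types
		kubectl("wait", "--for=condition=Established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
			"--timeout=60s")
		// phase 2: only now apply objects of the new kinds
		kubectl("apply", "-f", "csi-hostpath-snapshotclass.yaml")
	}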
	I1019 16:21:23.839740    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.223998911s)
	I1019 16:21:23.839772    8599 addons.go:480] Verifying addon registry=true in "addons-557770"
	I1019 16:21:23.839907    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.032167186s)
	I1019 16:21:23.839940    8599 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-557770"
	I1019 16:21:23.841504    8599 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 16:21:23.841523    8599 out.go:179] * Verifying registry addon...
	I1019 16:21:23.844121    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 16:21:23.844178    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 16:21:23.847540    8599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:21:23.847583    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:23.848747    8599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:21:23.848769    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
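The kapi.go:75/kapi.go:96 lines here and below are a poll loop: list pods matching a label selector, report the current phase, sleep, and repeat until the pods are Running. A minimal client-go sketch of that pattern (a sketch, not minikube's kapi.go; the kubeconfig path is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute)
	}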
	I1019 16:21:23.948229    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:24.017324    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:24.347182    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:24.347241    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:24.364329    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:24.364365    8599 retry.go:31] will retry after 522.75767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1019 16:21:24.377945    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:24.439594    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:24.847824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:24.847873    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:24.887778    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:24.949212    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:25.347648    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:25.347776    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:25.439394    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:25.848036    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:25.848185    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:25.949360    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:26.347917    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:26.347964    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:26.449787    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:26.518856    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.501313649s)
	I1019 16:21:26.518938    8599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.631127797s)
	W1019 16:21:26.518976    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:26.519000    8599 retry.go:31] will retry after 569.326931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:26.847794    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:26.847841    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:26.877281    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:26.948802    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:27.089359    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:27.348571    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:27.348571    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:27.440390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:21:27.627530    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:27.627562    8599 retry.go:31] will retry after 747.557854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:27.847307    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:27.847423    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:27.948149    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:28.347636    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:28.347773    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:28.375847    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:28.440176    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:28.847352    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:28.847391    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:28.915645    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:28.915681    8599 retry.go:31] will retry after 1.278947633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
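The retry delays recorded so far for this apply (334ms, 522ms, 569ms, 747ms, 1.27s, ...) grow roughly exponentially with jitter. minikube's actual policy in retry.go is not shown in this log; the following is only a minimal sketch of a jittered exponential-backoff loop with that shape:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply runs fn until it succeeds or attempts run out, doubling the
	// base delay each round and adding up to 50% jitter so that concurrent
	// retries do not synchronize.
	func retryApply(fn func() error, attempts int) error {
		delay := 300 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryApply(func() error {
			return exec.Command("kubectl", "apply", "--force",
				"-f", "ig-crd.yaml", "-f", "ig-deployment.yaml").Run()
		}, 8)
	}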
	I1019 16:21:28.948481    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:29.347689    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:29.347730    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:29.378279    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:29.440288    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:29.475383    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 16:21:29.475448    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:29.494087    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:29.604770    8599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 16:21:29.618018    8599 addons.go:239] Setting addon gcp-auth=true in "addons-557770"
	I1019 16:21:29.618102    8599 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:21:29.618457    8599 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:21:29.636259    8599 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 16:21:29.636337    8599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:21:29.654446    8599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:21:29.749334    8599 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:29.750803    8599 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 16:21:29.752354    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 16:21:29.752375    8599 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 16:21:29.766838    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 16:21:29.766861    8599 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 16:21:29.780082    8599 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:29.780113    8599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 16:21:29.793552    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:29.847675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:29.847706    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:29.939720    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:30.113966    8599 addons.go:480] Verifying addon gcp-auth=true in "addons-557770"
	I1019 16:21:30.119176    8599 out.go:179] * Verifying gcp-auth addon...
	I1019 16:21:30.121484    8599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 16:21:30.127927    8599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 16:21:30.127955    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:30.195085    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:30.347673    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:30.347706    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:30.440213    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:30.624819    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:30.730340    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:30.730370    8599 retry.go:31] will retry after 2.40768445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:30.847717    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:30.847893    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:30.940383    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:31.125201    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:31.347856    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:31.347870    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:31.440085    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:31.624701    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:31.846645    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:31.846806    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:31.877255    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:31.939924    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:32.124996    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:32.347694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:32.347795    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:32.440320    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:32.624879    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:32.847568    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:32.847596    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:32.939975    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:33.124766    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:33.138926    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:33.347196    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:33.347304    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:33.439634    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:33.624184    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:33.679127    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:33.679159    8599 retry.go:31] will retry after 1.514965587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:33.846780    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:33.846867    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:33.939802    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:34.124492    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:34.347536    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:34.347652    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:34.378092    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:34.439515    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:34.624993    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:34.847653    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:34.847675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:34.939749    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:35.124639    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:35.194827    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:35.347445    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:35.347554    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:35.440111    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:35.625476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:21:35.733170    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:35.733202    8599 retry.go:31] will retry after 5.197682713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:35.846780    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:35.846791    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:35.940299    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:36.125213    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:36.346799    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:36.346958    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:36.439447    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:36.625130    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:36.847017    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:36.847032    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:36.877569    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:36.940329    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:37.125890    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:37.347810    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:37.347835    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:37.439390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:37.625172    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:37.846741    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:37.846847    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:37.940538    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:38.124207    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:38.347186    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:38.347250    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:38.439308    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:38.624925    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:38.847682    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:38.847812    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:38.940089    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:39.124938    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:39.347968    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:39.348101    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:39.377489    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:39.440193    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:39.624804    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:39.847408    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:39.847579    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:39.939713    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:40.124199    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:40.347625    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:40.347754    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:40.440164    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:40.624628    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:40.847598    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:40.847601    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:40.931857    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:40.940055    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:41.125246    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:41.346824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:41.346935    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:41.439995    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:21:41.476358    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:41.476385    8599 retry.go:31] will retry after 5.864833014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 16:21:41.625126    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:41.846932    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:41.847084    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:41.877415    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:41.940255    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:42.124792    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:42.347692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:42.347700    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:42.440045    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:42.624617    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:42.847322    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:42.847331    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:42.939803    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:43.124836    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:43.347494    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:43.347519    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:43.439937    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:43.624511    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:43.847033    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:43.847180    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:43.877472    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:43.940285    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:44.124822    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:44.347761    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:44.347779    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:44.440183    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:44.624814    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:44.847812    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:44.847855    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:44.940237    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:45.125448    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:45.347635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:45.347683    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:45.440250    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:45.624933    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:45.847737    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:45.847818    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:45.939831    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:46.124298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:46.347099    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:46.347227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:46.377696    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:46.439489    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:46.624899    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:46.848165    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:46.848186    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:46.939708    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:47.124957    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:47.342309    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:47.347368    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:47.347418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:47.439996    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:47.624630    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:47.847103    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:47.847227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:47.874716    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:47.874756    8599 retry.go:31] will retry after 13.58717238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
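
The failure above is client-side validation, not a cluster problem: one of the YAML documents in ig-crd.yaml lacks its apiVersion and kind fields, so kubectl rejects the file before anything reaches the apiserver, and the addon installer keeps retrying the same apply with a randomized backoff. A minimal sketch of that apply-and-retry pattern, assuming only the Go standard library and a hypothetical applyManifests helper (an illustration of the pattern in the log, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyManifests shells out to kubectl the way the log does; the
	// helper name is hypothetical, the manifest paths come from the log.
	func applyManifests(paths ...string) error {
		args := []string{"apply", "--force"}
		for _, p := range paths {
			args = append(args, "-f", p)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %w\n%s", err, out)
		}
		return nil
	}

	func main() {
		delay := 5 * time.Second // the real installer picks a randomized interval
		for attempt := 1; attempt <= 5; attempt++ {
			err := applyManifests(
				"/etc/kubernetes/addons/ig-crd.yaml",
				"/etc/kubernetes/addons/ig-deployment.yaml",
			)
			if err == nil {
				return
			}
			fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}
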
	I1019 16:21:47.940422    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:48.124937    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:48.347703    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:48.347801    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:48.439898    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:48.624709    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:48.847530    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:48.847573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:48.878043    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:48.939799    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:49.124298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:49.346869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:49.346885    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:49.440285    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:49.624931    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:49.847646    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:49.847828    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:49.940016    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:50.124699    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:50.347687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:50.347694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:50.440424    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:50.624857    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:50.847753    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:50.847872    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:50.940361    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:51.124839    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:51.349928    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:51.350094    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:51.377672    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:51.439390    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:51.625045    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:51.847022    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:51.847143    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:51.939895    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:52.125023    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:52.347960    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:52.347953    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:52.439324    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:52.624988    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:52.847853    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:52.847940    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:52.940092    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:53.124601    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:53.347384    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.347455    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:53.378428    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:53.440165    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:53.624628    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:53.847520    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.847573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:53.940040    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.124624    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:54.347333    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.347440    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.439766    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.624418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:54.847682    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.847710    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.940169    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.124908    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:55.347976    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.348000    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:55.440466    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.625125    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:55.846831    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.846947    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:55.877586    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:55.939172    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.124758    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:56.347554    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.347583    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.440326    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.624885    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:56.847763    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.847795    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.940457    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.125168    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.346762    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.346867    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.440277    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.624897    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.847687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.847692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:57.878060    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:21:57.939934    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.124597    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.347572    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.347608    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:58.439952    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.624617    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.847365    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.847450    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:58.939659    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.124267    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.347130    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.347187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.439113    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.624112    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.846847    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.846918    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.940008    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.124419    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.347373    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.347387    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:00.377704    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:22:00.439558    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.624095    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.846675    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.846685    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.939976    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.124552    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.347309    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.347426    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.439616    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.462721    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:01.624382    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.847773    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.847868    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.939963    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 16:22:02.001806    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.001840    8599 retry.go:31] will retry after 11.85035315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.124424    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.347204    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.347308    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:02.378028    8599 node_ready.go:57] node "addons-557770" has "Ready":"False" status (will retry)
	I1019 16:22:02.439905    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.624289    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.846955    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.847039    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.940498    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.128318    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.348632    8599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:22:03.348657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.348861    8599 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:22:03.348884    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.377889    8599 node_ready.go:49] node "addons-557770" is "Ready"
	I1019 16:22:03.377924    8599 node_ready.go:38] duration metric: took 41.003209654s for node "addons-557770" to be "Ready" ...
	I1019 16:22:03.377943    8599 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:22:03.377999    8599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:22:03.398273    8599 api_server.go:72] duration metric: took 41.659315703s to wait for apiserver process to appear ...
	I1019 16:22:03.398305    8599 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:22:03.398329    8599 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 16:22:03.404322    8599 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 16:22:03.405580    8599 api_server.go:141] control plane version: v1.34.1
	I1019 16:22:03.405615    8599 api_server.go:131] duration metric: took 7.30174ms to wait for apiserver health ...
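
The healthz wait above is a plain HTTPS GET that counts as healthy once the endpoint returns 200 with body "ok". A minimal polling sketch, assuming certificate verification is skipped because the apiserver presents a cluster-signed certificate (illustrative only, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification for the sketch; a real client
		// would trust the cluster CA bundle instead.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
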
	I1019 16:22:03.405626    8599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:22:03.448952    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.450376    8599 system_pods.go:59] 20 kube-system pods found
	I1019 16:22:03.450487    8599 system_pods.go:61] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.450577    8599 system_pods.go:61] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.450603    8599 system_pods.go:61] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.450637    8599 system_pods.go:61] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.450648    8599 system_pods.go:61] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.450654    8599 system_pods.go:61] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.450660    8599 system_pods.go:61] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.450668    8599 system_pods.go:61] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.450674    8599 system_pods.go:61] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.450685    8599 system_pods.go:61] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.450691    8599 system_pods.go:61] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.450698    8599 system_pods.go:61] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.450706    8599 system_pods.go:61] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.450717    8599 system_pods.go:61] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.450727    8599 system_pods.go:61] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.450738    8599 system_pods.go:61] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.450746    8599 system_pods.go:61] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.450763    8599 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.450774    8599 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.450783    8599 system_pods.go:61] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.450795    8599 system_pods.go:74] duration metric: took 45.16243ms to wait for pod list to return data ...
	I1019 16:22:03.450808    8599 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:22:03.455398    8599 default_sa.go:45] found service account: "default"
	I1019 16:22:03.455430    8599 default_sa.go:55] duration metric: took 4.610977ms for default service account to be created ...
	I1019 16:22:03.455442    8599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:22:03.550425    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:03.550461    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.550475    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.550483    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.550488    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.550494    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.550498    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.550502    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.550505    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.550509    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.550514    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.550517    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.550522    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.550527    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.550541    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.550546    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.550553    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.550558    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.550564    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.550570    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.550577    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.550590    8599 retry.go:31] will retry after 191.402617ms: missing components: kube-dns
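
The loop above lists every kube-system pod, checks which required components are Running, and retries on a short backoff until none are missing (here kube-dns, served by the coredns pod, is still Pending). A condensed sketch of that poll using client-go, assuming a kubeconfig at the default location and the standard k8s-app=kube-dns label (illustrative, not minikube's system_pods.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allRunning reports whether every pod in the list has reached the
	// Running phase.
	func allRunning(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(config)

		for {
			pods, err := clientset.CoreV1().Pods("kube-system").
				List(context.TODO(), metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && allRunning(pods.Items) {
				fmt.Println("kube-dns is running")
				return
			}
			fmt.Println("missing components: kube-dns, will retry")
			time.Sleep(300 * time.Millisecond)
		}
	}
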
	I1019 16:22:03.624398    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.746462    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:03.746521    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:03.746529    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:03.746538    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:03.746547    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:03.746553    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:03.746558    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:03.746562    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:03.746566    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:03.746569    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:03.746577    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:03.746582    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:03.746587    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:03.746594    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:03.746600    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:03.746605    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:03.746611    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:03.746616    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:03.746622    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.746637    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:03.746646    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:03.746660    8599 retry.go:31] will retry after 343.891877ms: missing components: kube-dns
	I1019 16:22:03.848132    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.848223    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.940137    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.095533    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:04.095567    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:04.095575    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:04.095582    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:04.095589    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:04.095595    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:04.095599    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:04.095603    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:04.095607    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:04.095610    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:04.095615    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:04.095618    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:04.095621    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:04.095626    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:04.095638    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:04.095645    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:04.095652    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:04.095657    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:04.095661    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.095666    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.095678    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:04.095693    8599 retry.go:31] will retry after 396.766042ms: missing components: kube-dns
	I1019 16:22:04.125279    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.350006    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.351570    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.441698    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.498832    8599 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:04.498873    8599 system_pods.go:89] "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:04.498882    8599 system_pods.go:89] "coredns-66bc5c9577-2p98v" [cbf64d34-66dc-4b0c-a26e-683f5a1493d0] Running
	I1019 16:22:04.498894    8599 system_pods.go:89] "csi-hostpath-attacher-0" [0e47eaab-388b-48ea-b21a-d5358c786d55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:04.498902    8599 system_pods.go:89] "csi-hostpath-resizer-0" [4bc94788-dde1-4e39-a836-7ee397bbfc20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:04.498911    8599 system_pods.go:89] "csi-hostpathplugin-vvt5x" [0d9d010b-5e2d-4d3a-ade4-d3b5c6f3e597] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:04.498926    8599 system_pods.go:89] "etcd-addons-557770" [2a19f971-beeb-430b-9fb0-1bcbef816b18] Running
	I1019 16:22:04.498933    8599 system_pods.go:89] "kindnet-qbbdx" [6665252f-6f3c-437d-82ee-a664d8b2e0f9] Running
	I1019 16:22:04.498941    8599 system_pods.go:89] "kube-apiserver-addons-557770" [1821f9dd-e6bd-4635-8d51-11fcf09ee5ed] Running
	I1019 16:22:04.498947    8599 system_pods.go:89] "kube-controller-manager-addons-557770" [8dbfdee4-fcbb-48f3-abc4-9d0cab4764b5] Running
	I1019 16:22:04.498956    8599 system_pods.go:89] "kube-ingress-dns-minikube" [818774a0-0653-4521-93ca-ba3404f8c482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:04.498963    8599 system_pods.go:89] "kube-proxy-zp9mk" [003abdd5-73da-456e-b519-34ed06ba8fa2] Running
	I1019 16:22:04.498969    8599 system_pods.go:89] "kube-scheduler-addons-557770" [3a398103-d56b-4dcd-87a0-cfe43844520e] Running
	I1019 16:22:04.498978    8599 system_pods.go:89] "metrics-server-85b7d694d7-6qb49" [ccf97c38-af56-4fdc-a1eb-238e1f9c98f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:04.498986    8599 system_pods.go:89] "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:04.498994    8599 system_pods.go:89] "registry-6b586f9694-fcnms" [85084e74-70aa-4ec6-a747-cd19730ff37b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:04.499002    8599 system_pods.go:89] "registry-creds-764b6fb674-9zcvj" [ae105e8b-c740-4d2e-8cbf-ac8ec523125c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:04.499011    8599 system_pods.go:89] "registry-proxy-cbqn4" [3d7d6881-00f4-45ae-aa7e-0d2b40fe10b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:04.499019    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w96c" [fc3aea78-7c62-4898-8a06-826e86881a70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.499029    8599 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g8g8j" [1c50c7f6-05c3-4444-a642-7d2cbd98fed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:04.499034    8599 system_pods.go:89] "storage-provisioner" [1b036529-5685-4c48-b9df-a83ee5b242ea] Running
	I1019 16:22:04.499045    8599 system_pods.go:126] duration metric: took 1.043595241s to wait for k8s-apps to be running ...
	I1019 16:22:04.499055    8599 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:22:04.499129    8599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:22:04.518335    8599 system_svc.go:56] duration metric: took 19.271494ms WaitForService to wait for kubelet
	I1019 16:22:04.518365    8599 kubeadm.go:587] duration metric: took 42.779411932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:22:04.518397    8599 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:22:04.522260    8599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 16:22:04.522285    8599 node_conditions.go:123] node cpu capacity is 8
	I1019 16:22:04.522298    8599 node_conditions.go:105] duration metric: took 3.895909ms to run NodePressure ...
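
The NodePressure verification reads capacity and conditions straight off the node object; the "node cpu capacity is 8" and ephemeral-storage figures above come from node.Status.Capacity. A short sketch of the same read with client-go, assuming the default kubeconfig path (illustrative only):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(config)

		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "addons-557770", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
		// A healthy node reports MemoryPressure/DiskPressure/PIDPressure
		// as False and Ready as True.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
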
	I1019 16:22:04.522310    8599 start.go:242] waiting for startup goroutines ...
	I1019 16:22:04.625348    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.848190    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.848406    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.940046    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.125670    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.348240    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.348400    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.440332    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.625544    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.847809    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.847932    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.940033    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.125036    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.348294    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.348435    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.440523    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.625140    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.848864    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.848925    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.940185    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.126049    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.348516    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.348660    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.440501    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.625307    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.847476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.847650    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.940365    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.125060    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.348418    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.348439    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.449180    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.625261    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.847368    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.847415    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.940555    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.125915    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.349236    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.350532    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.441137    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.625808    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.848294    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.848351    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.940363    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.125125    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.348470    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.348657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:10.442189    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.625143    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.887630    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.887767    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.023610    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.125034    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.348233    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.348357    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.440185    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.625735    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.848157    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.848217    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.940104    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.125024    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.348592    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.348649    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.449700    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.624533    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.847405    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.847534    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.940117    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.125164    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.351243    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.351411    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.440746    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.624688    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.848158    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.848238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.853284    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:13.939444    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.124879    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.348159    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.348181    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:22:14.421746    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:14.421785    8599 retry.go:31] will retry after 29.079297243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
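The `[apiVersion not set, kind not set]` validation failure above means at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml reached kubectl's client-side validator without those two required top-level fields; every other object in the bundle applied cleanly (all stdout lines read "unchanged" or "configured"), so the problem is confined to that one file. The file's contents are not captured in this log, so the following is only a sketch of the header that every document in a multi-document manifest must carry (the CRD name shown is hypothetical):

    # Each "---"-separated document needs both apiVersion and kind; a
    # document that has content but neither field produces exactly the
    # "[apiVersion not set, kind not set]" error quoted above.
    ---
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io   # hypothetical CRD name
    # (full CRD spec elided)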
	I1019 16:22:14.449095    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.624785    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.847767    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.847939    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.940274    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.125147    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.348401    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.348489    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.440541    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.625238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.847714    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.847911    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.940528    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.124407    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.347958    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.348110    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.440165    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.625040    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.848280    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.848425    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.940761    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.124881    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.348420    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.348637    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.440748    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.625692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.847960    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.848012    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.939905    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.125719    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.348225    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.348412    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.440119    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.625460    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.847851    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.847876    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.948682    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.124585    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.348054    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.348187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.448842    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.624679    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.847739    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.847974    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.939736    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.124858    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.425711    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.425778    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.484486    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.624852    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.847971    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.847981    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.940562    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.124356    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.348562    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.350599    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.442816    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.625245    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.847498    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.847620    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.940434    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.124336    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.347057    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.349597    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.439293    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.625298    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.847436    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.847572    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.940618    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.125508    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.347325    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.347566    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.439709    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.624457    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.847635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.847692    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.940346    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.125089    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.349390    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.349694    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.441135    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.628113    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.855392    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.856286    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.940781    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.125096    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.366818    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.367576    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.481350    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.637112    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.848824    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.848959    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.940338    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.127830    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.349024    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.349163    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.440690    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.625568    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.848574    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.848635    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.940687    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.125106    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.438476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.438718    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.440751    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.624687    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.848038    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.848193    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.940383    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.125573    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.347754    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.348477    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.439974    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.624869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.848413    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.848586    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.940690    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.125413    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.472352    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.472426    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.472657    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.626180    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.848648    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.848647    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.940947    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.124842    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.348284    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.348541    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.440266    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.625233    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.847476    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.847612    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.948627    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.124869    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.348279    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.348444    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.440943    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.625542    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.030809    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.030818    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.030864    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.125137    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.348227    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.348402    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.448909    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.625491    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.848505    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.848804    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.940449    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.125238    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.348100    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.348306    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.440405    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.625704    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.849208    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.849305    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.939874    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.124898    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.348651    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.349191    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.441668    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.625417    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.847480    8599 kapi.go:107] duration metric: took 1m11.003299175s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 16:22:34.847626    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.940673    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.124831    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.348442    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.544035    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.684217    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.848919    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.941932    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.125843    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.348040    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.440036    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.625174    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.848318    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.948909    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.127344    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.348894    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.442024    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.625865    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.848912    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.940386    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.124686    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.348098    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.441554    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.625115    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.848407    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.940192    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.125231    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.347759    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.440899    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.624768    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.848309    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.940276    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.125247    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.348373    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.440529    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.624927    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.848175    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.949315    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.126260    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.347676    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.443117    8599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.625187    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.847543    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.027801    8599 kapi.go:107] duration metric: took 1m18.591278832s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 16:22:42.162953    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.348326    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.625154    8599 kapi.go:107] duration metric: took 1m12.503666122s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 16:22:42.627111    8599 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-557770 cluster.
	I1019 16:22:42.628733    8599 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 16:22:42.630380    8599 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
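Per the gcp-auth messages above, a pod opts out of credential mounting via a label whose key is `gcp-auth-skip-secret`. A minimal pod manifest carrying that label might look like the sketch below; the pod name and label value are illustrative (the message only fixes the key), and the image is the busybox image pulled later in this same log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-demo          # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"   # key per the gcp-auth message; value illustrative
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]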
	I1019 16:22:42.847943    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.347488    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.501595    8599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:43.848284    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:44.139738    8599 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 16:22:44.139894    8599 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
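The retry.go and addons.go lines above show the shape of minikube's addon apply path: run the kubectl command, and on failure log the full stdout/stderr, sleep a randomized backoff (29.079297243s in this run), retry, and once the callback budget is exhausted emit the `! Enabling 'inspektor-gadget' returned an error` warning just printed. Below is a minimal stdlib-only Go sketch of that retry-with-jittered-backoff pattern, assuming exponential growth between attempts; minikube's actual retry policy may differ, and applyManifests is a hypothetical stand-in for the kubectl invocation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // applyManifests stands in for the `kubectl apply --force -f ...` call;
    // hypothetical helper, not minikube's real function.
    func applyManifests() error {
    	return errors.New(`error validating "ig-crd.yaml": [apiVersion not set, kind not set]`)
    }

    // retryWithBackoff retries fn with jittered exponential backoff, mirroring
    // the "will retry after 29.07...s" behavior in the log above (a sketch,
    // not minikube's exact retry.go implementation).
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Exponential backoff with up to 50% random jitter.
    		delay := base << uint(i)
    		delay += time.Duration(rand.Int63n(int64(delay / 2)))
    		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("running callbacks: giving up after %d attempts: %w", attempts, err)
    }

    func main() {
    	if err := retryWithBackoff(applyManifests, 3, 10*time.Second); err != nil {
    		fmt.Println("! Enabling 'inspektor-gadget' returned an error:", err)
    	}
    }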
	I1019 16:22:44.348439    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.848354    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.348840    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.848031    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.347943    8599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.847622    8599 kapi.go:107] duration metric: took 1m23.003499672s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 16:22:46.849665    8599 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, amd-gpu-device-plugin, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 16:22:46.851340    8599 addons.go:515] duration metric: took 1m25.112321573s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds amd-gpu-device-plugin default-storageclass nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 16:22:46.851384    8599 start.go:247] waiting for cluster config update ...
	I1019 16:22:46.851412    8599 start.go:256] writing updated cluster config ...
	I1019 16:22:46.851709    8599 ssh_runner.go:195] Run: rm -f paused
	I1019 16:22:46.855748    8599 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:22:46.858986    8599 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2p98v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.863498    8599 pod_ready.go:94] pod "coredns-66bc5c9577-2p98v" is "Ready"
	I1019 16:22:46.863528    8599 pod_ready.go:86] duration metric: took 4.517912ms for pod "coredns-66bc5c9577-2p98v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.865585    8599 pod_ready.go:83] waiting for pod "etcd-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.869554    8599 pod_ready.go:94] pod "etcd-addons-557770" is "Ready"
	I1019 16:22:46.869588    8599 pod_ready.go:86] duration metric: took 3.980467ms for pod "etcd-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.871418    8599 pod_ready.go:83] waiting for pod "kube-apiserver-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.875183    8599 pod_ready.go:94] pod "kube-apiserver-addons-557770" is "Ready"
	I1019 16:22:46.875214    8599 pod_ready.go:86] duration metric: took 3.774406ms for pod "kube-apiserver-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:46.878840    8599 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.260092    8599 pod_ready.go:94] pod "kube-controller-manager-addons-557770" is "Ready"
	I1019 16:22:47.260118    8599 pod_ready.go:86] duration metric: took 381.247465ms for pod "kube-controller-manager-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.460559    8599 pod_ready.go:83] waiting for pod "kube-proxy-zp9mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:47.886740    8599 pod_ready.go:94] pod "kube-proxy-zp9mk" is "Ready"
	I1019 16:22:47.886776    8599 pod_ready.go:86] duration metric: took 426.185807ms for pod "kube-proxy-zp9mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.143494    8599 pod_ready.go:83] waiting for pod "kube-scheduler-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.460162    8599 pod_ready.go:94] pod "kube-scheduler-addons-557770" is "Ready"
	I1019 16:22:48.460201    8599 pod_ready.go:86] duration metric: took 316.676281ms for pod "kube-scheduler-addons-557770" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:22:48.460218    8599 pod_ready.go:40] duration metric: took 1.604432361s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:22:48.508637    8599 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 16:22:48.510258    8599 out.go:179] * Done! kubectl is now configured to use "addons-557770" cluster and "default" namespace by default
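The kapi.go:96/kapi.go:107 and pod_ready.go lines that dominate this run come from polling loops that repeatedly list pods by label selector until one reports Ready, then record the elapsed time as a duration metric. A condensed client-go sketch of that pattern follows; the kubeconfig path, namespace, and selector are taken from this log, but the function itself is illustrative rather than minikube's actual source:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls pods matching selector in ns until one reports the
    // Ready condition, mirroring the repeated kapi.go:96 "waiting for pod" lines.
    func waitForPodReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			p := pods.Items[0]
    			for _, c := range p.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls at sub-second intervals
    	}
    	return fmt.Errorf("timed out waiting for %q", selector)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Namespace and selector taken from the log above.
    	if err := waitForPodReady(cs, "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }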
	
	
	==> CRI-O <==
	Oct 19 16:22:49 addons-557770 crio[772]: time="2025-10-19T16:22:49.364278502Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eee7656e-7315-4111-a1ee-ecc3ffae60b3 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:22:49 addons-557770 crio[772]: time="2025-10-19T16:22:49.366029378Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.01930623Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=eee7656e-7315-4111-a1ee-ecc3ffae60b3 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.01984266Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1b77eba-6f72-494a-b17e-198ba735ca62 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.021330201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1783e7df-1575-408f-9098-c7dcf6a7b57b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.025119167Z" level=info msg="Creating container: default/busybox/busybox" id=bd124952-e579-4730-a37f-b9fb9c607d6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.025860733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.031207375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.031755179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.074113568Z" level=info msg="Created container 517678a62c8c36cb2e21c20e59a6cba610bb42300711d41bf14aeed4ffdb62ae: default/busybox/busybox" id=bd124952-e579-4730-a37f-b9fb9c607d6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.074824309Z" level=info msg="Starting container: 517678a62c8c36cb2e21c20e59a6cba610bb42300711d41bf14aeed4ffdb62ae" id=8c1a5ffe-39f6-47bf-905a-6810716bc276 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 16:22:50 addons-557770 crio[772]: time="2025-10-19T16:22:50.076862439Z" level=info msg="Started container" PID=6366 containerID=517678a62c8c36cb2e21c20e59a6cba610bb42300711d41bf14aeed4ffdb62ae description=default/busybox/busybox id=8c1a5ffe-39f6-47bf-905a-6810716bc276 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b143e15b70687cf3df92e01f6d417500cc40037764a7d343e0213dc75b379caf
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.173968063Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea/POD" id=ac2ff716-6769-4d81-9864-5c7e52aeca28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.174054218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.179803927Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea Namespace:local-path-storage ID:9d043a291c19e998511c85ef0184679a0d8bde76706d85f1eac543e54bf301cf UID:abbae647-508f-451d-a8ba-c9adc7bfedec NetNS:/var/run/netns/07aeac37-e3cc-42ce-bd98-9558d010d26a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a700}] Aliases:map[]}"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.179835459Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea to CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.189924377Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea Namespace:local-path-storage ID:9d043a291c19e998511c85ef0184679a0d8bde76706d85f1eac543e54bf301cf UID:abbae647-508f-451d-a8ba-c9adc7bfedec NetNS:/var/run/netns/07aeac37-e3cc-42ce-bd98-9558d010d26a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a700}] Aliases:map[]}"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.190089233Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea for CNI network kindnet (type=ptp)"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.191181173Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.192323789Z" level=info msg="Ran pod sandbox 9d043a291c19e998511c85ef0184679a0d8bde76706d85f1eac543e54bf301cf with infra container: local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea/POD" id=ac2ff716-6769-4d81-9864-5c7e52aeca28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.19354003Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e32bf738-11be-4d04-8a1a-8a2cc8752197 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.193707188Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=e32bf738-11be-4d04-8a1a-8a2cc8752197 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.193755765Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=e32bf738-11be-4d04-8a1a-8a2cc8752197 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.194261558Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=523d8a4e-c50f-41bb-8aa4-159ae9e3d1b6 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:22:57 addons-557770 crio[772]: time="2025-10-19T16:22:57.195957717Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	517678a62c8c3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   b143e15b70687       busybox                                     default
	9d790d16e8959       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	5888ac56628ff       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	50a236ad43b3d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 seconds ago       Running             liveness-probe                           0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	05400eb2fd5eb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 seconds ago       Running             hostpath                                 0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	efa983b0c0938       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   b1502643499dd       gcp-auth-78565c9fb4-d8qwj                   gcp-auth
	000220bd236a5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             17 seconds ago       Running             controller                               0                   e241852a6cbd0       ingress-nginx-controller-675c5ddd98-jcfrv   ingress-nginx
	2e93d36349d27       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	943c05844222c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            21 seconds ago       Running             gadget                                   0                   95f17bddfabb1       gadget-jpd5t                                gadget
	c503d7edb96f3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   10bfc35e635b9       registry-proxy-cbqn4                        kube-system
	bfa49cfea4019       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   25 seconds ago       Running             csi-external-health-monitor-controller   0                   e3476d9211540       csi-hostpathplugin-vvt5x                    kube-system
	aeccc8f632779       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   52909df1b2d0b       nvidia-device-plugin-daemonset-5d5sr        kube-system
	d3273937d7efc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   42416c16b54d5       csi-hostpath-resizer-0                      kube-system
	3b0ecd419df99       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   35 seconds ago       Exited              patch                                    0                   5f5a41b680963       gcp-auth-certs-patch-gntdv                  gcp-auth
	6ef8a71fe5d39       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      35 seconds ago       Running             volume-snapshot-controller               0                   0018a5b9d0d6c       snapshot-controller-7d9fbc56b8-g8g8j        kube-system
	6e2091eec84de       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     35 seconds ago       Running             amd-gpu-device-plugin                    0                   aac0d45065796       amd-gpu-device-plugin-66kws                 kube-system
	037c0332147ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      37 seconds ago       Running             volume-snapshot-controller               0                   0eae7619bffee       snapshot-controller-7d9fbc56b8-7w96c        kube-system
	ab1a761dc93fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   38 seconds ago       Exited              patch                                    0                   14f9ac01bfee1       ingress-nginx-admission-patch-kb26q         ingress-nginx
	10a9a8761d34a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   38 seconds ago       Exited              create                                   0                   e90db58466ceb       gcp-auth-certs-create-hkd8f                 gcp-auth
	243f77f2d4cc6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   38 seconds ago       Exited              create                                   0                   c95dabd6611a9       ingress-nginx-admission-create-7tns9        ingress-nginx
	65863d6eb03db       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             39 seconds ago       Running             csi-attacher                             0                   d9d602515b856       csi-hostpath-attacher-0                     kube-system
	d904f44568c85       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              40 seconds ago       Running             yakd                                     0                   191e664d32140       yakd-dashboard-5ff678cb9-g4ltn              yakd-dashboard
	28fc6fff59e06       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             43 seconds ago       Running             local-path-provisioner                   0                   5db9d88acd8c1       local-path-provisioner-648f6765c9-gsfrf     local-path-storage
	28dca4a98797d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           44 seconds ago       Running             registry                                 0                   66519a6c74604       registry-6b586f9694-fcnms                   kube-system
	893241a14d701       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        45 seconds ago       Running             metrics-server                           0                   4ba47595d0d03       metrics-server-85b7d694d7-6qb49             kube-system
	091ae9a183d46       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               46 seconds ago       Running             cloud-spanner-emulator                   0                   485c819e3059e       cloud-spanner-emulator-86bd5cbb97-w5gdv     default
	b929bd44832f6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               49 seconds ago       Running             minikube-ingress-dns                     0                   174e82481266f       kube-ingress-dns-minikube                   kube-system
	ef3b3e7a48948       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             54 seconds ago       Running             coredns                                  0                   d89302dfd9c10       coredns-66bc5c9577-2p98v                    kube-system
	8d61047c5353c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             54 seconds ago       Running             storage-provisioner                      0                   120ad84aa01c8       storage-provisioner                         kube-system
	91694d16fb2b0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   df4eef4219e62       kube-proxy-zp9mk                            kube-system
	00766cde13982       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   da6ada4832f1e       kindnet-qbbdx                               kube-system
	49e88f9620ecb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   0b4fd5fb5b1fa       etcd-addons-557770                          kube-system
	64ffa4d775be3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   4947e309d55fc       kube-scheduler-addons-557770                kube-system
	7ea12626c6ada       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   237780f5ae90b       kube-apiserver-addons-557770                kube-system
	75207afa634f8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   0c2ca08606766       kube-controller-manager-addons-557770       kube-system
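
	Every container CRI-O started during this run is listed above, newest first; the Exited create/patch entries are one-shot certificate jobs for the ingress-nginx and gcp-auth webhooks, not failures. A minimal way to regather this view on the live profile (assuming the node image ships crictl, as minikube's does):

	  out/minikube-linux-amd64 -p addons-557770 ssh "sudo crictl ps -a"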
	
	
	==> coredns [ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b] <==
	[INFO] 10.244.0.14:37944 - 491 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004199506s
	[INFO] 10.244.0.14:60204 - 63449 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000125577s
	[INFO] 10.244.0.14:60204 - 63096 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000160416s
	[INFO] 10.244.0.14:41060 - 52821 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000074298s
	[INFO] 10.244.0.14:41060 - 53063 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000113648s
	[INFO] 10.244.0.14:48157 - 65128 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000075631s
	[INFO] 10.244.0.14:48157 - 43 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112782s
	[INFO] 10.244.0.14:57646 - 23954 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138255s
	[INFO] 10.244.0.14:57646 - 23738 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181076s
	[INFO] 10.244.0.22:39948 - 24828 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177635s
	[INFO] 10.244.0.22:60334 - 63680 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000250817s
	[INFO] 10.244.0.22:57228 - 8289 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130028s
	[INFO] 10.244.0.22:34939 - 31964 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177134s
	[INFO] 10.244.0.22:60932 - 37287 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100465s
	[INFO] 10.244.0.22:44172 - 41255 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147696s
	[INFO] 10.244.0.22:37960 - 36328 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003300553s
	[INFO] 10.244.0.22:43453 - 27961 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003542239s
	[INFO] 10.244.0.22:51062 - 5109 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00471519s
	[INFO] 10.244.0.22:49148 - 923 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00639971s
	[INFO] 10.244.0.22:57439 - 64337 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005384593s
	[INFO] 10.244.0.22:48502 - 18987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005582581s
	[INFO] 10.244.0.22:34854 - 63633 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005324608s
	[INFO] 10.244.0.22:36136 - 60755 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005668887s
	[INFO] 10.244.0.22:49890 - 34889 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001155391s
	[INFO] 10.244.0.22:43808 - 52476 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002546802s
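
	The NXDOMAIN bursts above are ordinary search-path expansion, not lookup failures: with the default ndots:5, queries for registry.kube-system.svc.cluster.local and storage.googleapis.com are first tried against every search domain (the cluster suffixes plus the host's GCE ones) before the final NOERROR answers land. To pull the same stream directly (assuming the stock k8s-app=kube-dns label):

	  kubectl logs -n kube-system -l k8s-app=kube-dns --tail=25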
	
	
	==> describe nodes <==
	Name:               addons-557770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-557770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=addons-557770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_21_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-557770
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-557770"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:21:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-557770
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:22:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:22:48 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:22:48 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:22:48 +0000   Sun, 19 Oct 2025 16:21:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:22:48 +0000   Sun, 19 Oct 2025 16:22:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-557770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2172c73a-e4ea-49ca-bef8-694dddc2eb52
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-w5gdv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gadget                      gadget-jpd5t                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  gcp-auth                    gcp-auth-78565c9fb4-d8qwj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jcfrv                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         95s
	  kube-system                 amd-gpu-device-plugin-66kws                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 coredns-66bc5c9577-2p98v                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     97s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpathplugin-vvt5x                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 etcd-addons-557770                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         102s
	  kube-system                 kindnet-qbbdx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      97s
	  kube-system                 kube-apiserver-addons-557770                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-addons-557770                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-zp9mk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-addons-557770                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 metrics-server-85b7d694d7-6qb49                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         95s
	  kube-system                 nvidia-device-plugin-daemonset-5d5sr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 registry-6b586f9694-fcnms                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 registry-creds-764b6fb674-9zcvj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 registry-proxy-cbqn4                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 snapshot-controller-7d9fbc56b8-7w96c                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-g8g8j                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  local-path-storage          helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-gsfrf                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-g4ltn                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 95s   kube-proxy       
	  Normal  Starting                 102s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s  kubelet          Node addons-557770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s  kubelet          Node addons-557770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s  kubelet          Node addons-557770 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           98s   node-controller  Node addons-557770 event: Registered Node addons-557770 in Controller
	  Normal  NodeReady                56s   kubelet          Node addons-557770 status is now: NodeReady
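
	The node report shows a healthy single control-plane node: Ready since 16:22:02, 28 pods scheduled, and aggregate requests of 1050m CPU / 638Mi memory against 8 CPUs and ~31Gi allocatable, so resource pressure is an unlikely culprit here. The same snapshot can be retaken with:

	  kubectl describe node addons-557770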
	
	
	==> dmesg <==
	[Oct19 16:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001862] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001003] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.093011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.445697] i8042: Warning: Keylock active
	[  +0.012030] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004804] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000958] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000971] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001227] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001085] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001141] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001189] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001040] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.536904] block sda: the capability attribute has been deprecated.
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
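
	The kernel ring buffer carries only host boot noise (CPU side-channel advisories, EISA/ACPI resource warnings, systemd unit nits); nothing kernel-side correlates with the test window. To re-read it inside the node with human-readable timestamps:

	  out/minikube-linux-amd64 -p addons-557770 ssh "sudo dmesg --ctime | tail -n 25"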
	
	
	==> etcd [49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468] <==
	{"level":"warn","ts":"2025-10-19T16:21:24.271159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:50.671790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:50.678156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:50.694527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:50.700957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:22:27.314619Z","caller":"traceutil/trace.go:172","msg":"trace[27266233] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"100.68502ms","start":"2025-10-19T16:22:27.213917Z","end":"2025-10-19T16:22:27.314602Z","steps":["trace[27266233] 'process raft request'  (duration: 100.554636ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:29.470164Z","caller":"traceutil/trace.go:172","msg":"trace[496259861] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1125; }","duration":"123.953446ms","start":"2025-10-19T16:22:29.346194Z","end":"2025-10-19T16:22:29.470148Z","steps":["trace[496259861] 'read index received'  (duration: 123.947823ms)","trace[496259861] 'applied index is now lower than readState.Index'  (duration: 4.932µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:29.470286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.071233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:29.470347Z","caller":"traceutil/trace.go:172","msg":"trace[525323882] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"124.154911ms","start":"2025-10-19T16:22:29.346184Z","end":"2025-10-19T16:22:29.470339Z","steps":["trace[525323882] 'agreement among raft nodes before linearized reading'  (duration: 124.045209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:29.470390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.160867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:29.470439Z","caller":"traceutil/trace.go:172","msg":"trace[700802004] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1096; }","duration":"124.217446ms","start":"2025-10-19T16:22:29.346211Z","end":"2025-10-19T16:22:29.470429Z","steps":["trace[700802004] 'agreement among raft nodes before linearized reading'  (duration: 124.141555ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:29.470435Z","caller":"traceutil/trace.go:172","msg":"trace[1970042286] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"142.154101ms","start":"2025-10-19T16:22:29.328269Z","end":"2025-10-19T16:22:29.470424Z","steps":["trace[1970042286] 'process raft request'  (duration: 141.959559ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:32.028513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.141781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-19T16:22:32.028543Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.172698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:32.028590Z","caller":"traceutil/trace.go:172","msg":"trace[951143562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"182.229461ms","start":"2025-10-19T16:22:31.846342Z","end":"2025-10-19T16:22:32.028571Z","steps":["trace[951143562] 'range keys from in-memory index tree'  (duration: 182.070344ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:32.028592Z","caller":"traceutil/trace.go:172","msg":"trace[985431473] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"182.22521ms","start":"2025-10-19T16:22:31.846355Z","end":"2025-10-19T16:22:32.028581Z","steps":["trace[985431473] 'range keys from in-memory index tree'  (duration: 182.114777ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.542402Z","caller":"traceutil/trace.go:172","msg":"trace[2115973208] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"103.454415ms","start":"2025-10-19T16:22:35.438918Z","end":"2025-10-19T16:22:35.542373Z","steps":["trace[2115973208] 'read index received'  (duration: 103.448276ms)","trace[2115973208] 'applied index is now lower than readState.Index'  (duration: 5.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:35.542574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.633387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:35.542605Z","caller":"traceutil/trace.go:172","msg":"trace[798378256] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1124; }","duration":"103.683562ms","start":"2025-10-19T16:22:35.438914Z","end":"2025-10-19T16:22:35.542598Z","steps":["trace[798378256] 'agreement among raft nodes before linearized reading'  (duration: 103.551957ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.542670Z","caller":"traceutil/trace.go:172","msg":"trace[1300325174] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"149.169408ms","start":"2025-10-19T16:22:35.393482Z","end":"2025-10-19T16:22:35.542651Z","steps":["trace[1300325174] 'process raft request'  (duration: 149.06006ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:22:35.562347Z","caller":"traceutil/trace.go:172","msg":"trace[270939368] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"165.856008ms","start":"2025-10-19T16:22:35.396471Z","end":"2025-10-19T16:22:35.562327Z","steps":["trace[270939368] 'process raft request'  (duration: 165.745006ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:22:48.141167Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.364393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:22:48.141247Z","caller":"traceutil/trace.go:172","msg":"trace[281555488] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1207; }","duration":"179.461896ms","start":"2025-10-19T16:22:47.961770Z","end":"2025-10-19T16:22:48.141232Z","steps":["trace[281555488] 'agreement among raft nodes before linearized reading'  (duration: 58.975108ms)","trace[281555488] 'range keys from in-memory index tree'  (duration: 120.347983ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:22:48.141903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.540697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040740583415926 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1201 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T16:22:48.141991Z","caller":"traceutil/trace.go:172","msg":"trace[1094269978] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"253.013031ms","start":"2025-10-19T16:22:47.888961Z","end":"2025-10-19T16:22:48.141974Z","steps":["trace[1094269978] 'process raft request'  (duration: 131.794385ms)","trace[1094269978] 'compare'  (duration: 120.44973ms)"],"step_count":2}
	
	
	==> gcp-auth [efa983b0c093864ff02f6d7eca25b115176c595c66ca518fe543533c863e46ce] <==
	2025/10/19 16:22:42 GCP Auth Webhook started!
	2025/10/19 16:22:48 Ready to marshal response ...
	2025/10/19 16:22:48 Ready to write response ...
	2025/10/19 16:22:49 Ready to marshal response ...
	2025/10/19 16:22:49 Ready to write response ...
	2025/10/19 16:22:49 Ready to marshal response ...
	2025/10/19 16:22:49 Ready to write response ...
	2025/10/19 16:22:56 Ready to marshal response ...
	2025/10/19 16:22:56 Ready to write response ...
	2025/10/19 16:22:56 Ready to marshal response ...
	2025/10/19 16:22:56 Ready to write response ...
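
	Note the timing: the webhook only started serving at 16:22:42, which is the same startup gap the apiserver log below records as fail-open webhook calls at 16:22:03; the marshal/write pairs are its normal per-admission output once healthy. Assuming the usual deployment name behind the gcp-auth-78565c9fb4 pod:

	  kubectl -n gcp-auth logs deploy/gcp-auth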
	
	
	==> kernel <==
	 16:22:58 up 5 min,  0 user,  load average: 2.64, 1.13, 0.43
	Linux addons-557770 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c] <==
	I1019 16:21:22.474500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:21:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:21:22.791652       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:21:22.791776       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:21:22.791795       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:21:22.793448       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 16:21:52.791986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 16:21:52.792117       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 16:21:52.794262       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 16:21:52.869809       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 16:21:54.191992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:21:54.192019       1 metrics.go:72] Registering metrics
	I1019 16:21:54.192127       1 controller.go:711] "Syncing nftables rules"
	I1019 16:22:02.795142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:02.795210       1 main.go:301] handling current node
	I1019 16:22:12.790642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:12.790685       1 main.go:301] handling current node
	I1019 16:22:22.790355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:22.790397       1 main.go:301] handling current node
	I1019 16:22:32.790497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:32.790543       1 main.go:301] handling current node
	I1019 16:22:42.790376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:42.790434       1 main.go:301] handling current node
	I1019 16:22:52.790314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:52.790372       1 main.go:301] handling current node
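
	kindnet's early i/o timeouts against the service VIP 10.96.0.1:443 end once its caches sync at 16:21:54, and the "handling current node" lines every 10 seconds are its routine reconcile loop, so the CNI settled after a startup-ordering window. To spot-check it now (assuming the default app=kindnet label on the daemonset):

	  kubectl logs -n kube-system -l app=kindnet --tail=20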
	
	
	==> kube-apiserver [7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5] <==
	I1019 16:21:30.052412       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.179.250"}
	W1019 16:21:50.671714       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:21:50.678137       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:21:50.694470       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:21:50.700949       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:03.015850       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.015895       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.015930       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.015957       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.036140       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.036179       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:03.038881       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.179.250:443: connect: connection refused
	E1019 16:22:03.039004       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.179.250:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:14.312054       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 16:22:14.312054       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.312147       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 16:22:14.312499       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.318300       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:14.338957       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.253.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.253.228:443: connect: connection refused" logger="UnhandledError"
	I1019 16:22:14.409745       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 16:22:56.191227       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37224: use of closed network connection
	E1019 16:22:56.348933       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37266: use of closed network connection
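
	Two separate startup windows show here: the gcp-auth mutating webhook calls "failing open" at 16:22:03 (admission proceeds while the backend at 10.110.179.250:443 is still coming up), and v1beta1.metrics.k8s.io returning 503/refused until the group is re-registered at 16:22:14 once metrics-server answers on 10.107.253.228:443. Whether the aggregated API has since gone Available is one command away:

	  kubectl get apiservice v1beta1.metrics.k8s.io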
	
	
	==> kube-controller-manager [75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65] <==
	I1019 16:21:20.655476       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:21:20.655544       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-557770"
	I1019 16:21:20.655546       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:21:20.655600       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 16:21:20.655791       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:21:20.655876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 16:21:20.655886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:21:20.655945       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:21:20.656404       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:21:20.656430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:21:20.656492       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:21:20.656510       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:21:20.658843       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:21:20.660965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:20.664524       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:21:20.676777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:21:23.163671       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 16:21:50.666232       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:21:50.666343       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 16:21:50.666383       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 16:21:50.685739       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 16:21:50.689163       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 16:21:50.766679       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:50.790054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:22:05.660764       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
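
	The lone hard error, serviceaccount "metrics-server" not found, is a creation race: the ReplicaSet controller tried to cut metrics-server pods before the addon's ServiceAccount existed, and the retry clearly succeeded since the pod is Running in the container listing. A direct check:

	  kubectl -n kube-system get serviceaccount metrics-server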
	
	
	==> kube-proxy [91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e] <==
	I1019 16:21:22.385790       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:21:22.694681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:21:22.797483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:21:22.798166       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:21:22.800178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:21:22.999701       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:21:22.999781       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:21:23.016745       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:21:23.026386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:21:23.026770       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:21:23.037478       1 config.go:200] "Starting service config controller"
	I1019 16:21:23.037558       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:21:23.037606       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:21:23.037630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:21:23.037663       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:21:23.037686       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:21:23.040175       1 config.go:309] "Starting node config controller"
	I1019 16:21:23.040257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:21:23.040288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:21:23.137786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:21:23.138049       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:21:23.138361       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
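
	kube-proxy settled into IPv4 iptables mode with all four config controllers synced by 16:21:23; the "nodePortAddresses is unset" line is an informational default that is harmless on a single-node cluster. Its chosen proxier can be confirmed from the pod's own log:

	  kubectl -n kube-system logs kube-proxy-zp9mk | grep Proxier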
	
	
	==> kube-scheduler [64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd] <==
	E1019 16:21:13.680519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:21:13.680648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:13.680692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:13.680823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:13.680953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:13.681052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:13.681096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:21:14.485340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:14.487591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:14.557104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:14.560526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:21:14.597605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:14.601876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:21:14.684886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:21:14.688554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:14.759790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:21:14.785614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:14.801798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:14.816953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:14.905323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:21:14.909235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 16:21:14.911125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:21:14.935228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 16:21:14.957224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:21:16.777723       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
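
	Every "Failed to watch ... is forbidden" entry predates 16:21:16, the moment the client-ca informer synced; this is the usual kubeadm bootstrap window where the scheduler races its own RBAC bindings, and none of the errors recur afterwards. To verify they stayed gone:

	  kubectl -n kube-system logs kube-scheduler-addons-557770 --since=5m | grep -c forbidden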
	
	
	==> kubelet <==
	Oct 19 16:22:24 addons-557770 kubelet[1306]: I1019 16:22:24.642552    1306 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8snjm\" (UniqueName: \"kubernetes.io/projected/edd27a72-4266-49e1-a8e6-aa52dc424d90-kube-api-access-8snjm\") on node \"addons-557770\" DevicePath \"\""
	Oct 19 16:22:25 addons-557770 kubelet[1306]: I1019 16:22:25.346646    1306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5a41b680963a3c331b9296ee3a9471451ae2413293a4198e7154e577bff1c9"
	Oct 19 16:22:31 addons-557770 kubelet[1306]: I1019 16:22:31.371769    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d5sr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:22:32 addons-557770 kubelet[1306]: I1019 16:22:32.376251    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d5sr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:22:34 addons-557770 kubelet[1306]: I1019 16:22:34.386816    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cbqn4" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:22:34 addons-557770 kubelet[1306]: I1019 16:22:34.399889    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-5d5sr" podStartSLOduration=4.041154245 podStartE2EDuration="31.3998644s" podCreationTimestamp="2025-10-19 16:22:03 +0000 UTC" firstStartedPulling="2025-10-19 16:22:03.477810861 +0000 UTC m=+47.465151932" lastFinishedPulling="2025-10-19 16:22:30.836520998 +0000 UTC m=+74.823862087" observedRunningTime="2025-10-19 16:22:31.385613517 +0000 UTC m=+75.372954630" watchObservedRunningTime="2025-10-19 16:22:34.3998644 +0000 UTC m=+78.387205488"
	Oct 19 16:22:34 addons-557770 kubelet[1306]: I1019 16:22:34.400084    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-cbqn4" podStartSLOduration=1.359200588 podStartE2EDuration="31.400049069s" podCreationTimestamp="2025-10-19 16:22:03 +0000 UTC" firstStartedPulling="2025-10-19 16:22:03.492136757 +0000 UTC m=+47.479477830" lastFinishedPulling="2025-10-19 16:22:33.53298522 +0000 UTC m=+77.520326311" observedRunningTime="2025-10-19 16:22:34.399381274 +0000 UTC m=+78.386722391" watchObservedRunningTime="2025-10-19 16:22:34.400049069 +0000 UTC m=+78.387390174"
	Oct 19 16:22:34 addons-557770 kubelet[1306]: E1019 16:22:34.936314    1306 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 19 16:22:34 addons-557770 kubelet[1306]: E1019 16:22:34.936424    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae105e8b-c740-4d2e-8cbf-ac8ec523125c-gcr-creds podName:ae105e8b-c740-4d2e-8cbf-ac8ec523125c nodeName:}" failed. No retries permitted until 2025-10-19 16:23:06.936399515 +0000 UTC m=+110.923740606 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ae105e8b-c740-4d2e-8cbf-ac8ec523125c-gcr-creds") pod "registry-creds-764b6fb674-9zcvj" (UID: "ae105e8b-c740-4d2e-8cbf-ac8ec523125c") : secret "registry-creds-gcr" not found
	Oct 19 16:22:35 addons-557770 kubelet[1306]: I1019 16:22:35.390534    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cbqn4" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:22:36 addons-557770 kubelet[1306]: I1019 16:22:36.409014    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-jpd5t" podStartSLOduration=67.048668458 podStartE2EDuration="1m13.408997017s" podCreationTimestamp="2025-10-19 16:21:23 +0000 UTC" firstStartedPulling="2025-10-19 16:22:29.828537216 +0000 UTC m=+73.815878299" lastFinishedPulling="2025-10-19 16:22:36.18886578 +0000 UTC m=+80.176206858" observedRunningTime="2025-10-19 16:22:36.407978773 +0000 UTC m=+80.395319866" watchObservedRunningTime="2025-10-19 16:22:36.408997017 +0000 UTC m=+80.396338110"
	Oct 19 16:22:41 addons-557770 kubelet[1306]: I1019 16:22:41.443489    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-jcfrv" podStartSLOduration=74.012682579 podStartE2EDuration="1m18.443467156s" podCreationTimestamp="2025-10-19 16:21:23 +0000 UTC" firstStartedPulling="2025-10-19 16:22:36.141870666 +0000 UTC m=+80.129211739" lastFinishedPulling="2025-10-19 16:22:40.572655231 +0000 UTC m=+84.559996316" observedRunningTime="2025-10-19 16:22:41.443299938 +0000 UTC m=+85.430641050" watchObservedRunningTime="2025-10-19 16:22:41.443467156 +0000 UTC m=+85.430808257"
	Oct 19 16:22:44 addons-557770 kubelet[1306]: I1019 16:22:44.161744    1306 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 19 16:22:44 addons-557770 kubelet[1306]: I1019 16:22:44.161788    1306 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 19 16:22:46 addons-557770 kubelet[1306]: I1019 16:22:46.473027    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-d8qwj" podStartSLOduration=70.292193499 podStartE2EDuration="1m16.473005983s" podCreationTimestamp="2025-10-19 16:21:30 +0000 UTC" firstStartedPulling="2025-10-19 16:22:36.142157655 +0000 UTC m=+80.129498730" lastFinishedPulling="2025-10-19 16:22:42.322970129 +0000 UTC m=+86.310311214" observedRunningTime="2025-10-19 16:22:42.444726 +0000 UTC m=+86.432067115" watchObservedRunningTime="2025-10-19 16:22:46.473005983 +0000 UTC m=+90.460347076"
	Oct 19 16:22:46 addons-557770 kubelet[1306]: I1019 16:22:46.473561    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vvt5x" podStartSLOduration=1.137334142 podStartE2EDuration="43.473549877s" podCreationTimestamp="2025-10-19 16:22:03 +0000 UTC" firstStartedPulling="2025-10-19 16:22:03.48241047 +0000 UTC m=+47.469751544" lastFinishedPulling="2025-10-19 16:22:45.818626207 +0000 UTC m=+89.805967279" observedRunningTime="2025-10-19 16:22:46.472571157 +0000 UTC m=+90.459912283" watchObservedRunningTime="2025-10-19 16:22:46.473549877 +0000 UTC m=+90.460890971"
	Oct 19 16:22:49 addons-557770 kubelet[1306]: I1019 16:22:49.149869    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbcm\" (UniqueName: \"kubernetes.io/projected/79a54fb4-2085-4e70-bc23-ee183a0b45cd-kube-api-access-ftbcm\") pod \"busybox\" (UID: \"79a54fb4-2085-4e70-bc23-ee183a0b45cd\") " pod="default/busybox"
	Oct 19 16:22:49 addons-557770 kubelet[1306]: I1019 16:22:49.149977    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/79a54fb4-2085-4e70-bc23-ee183a0b45cd-gcp-creds\") pod \"busybox\" (UID: \"79a54fb4-2085-4e70-bc23-ee183a0b45cd\") " pod="default/busybox"
	Oct 19 16:22:50 addons-557770 kubelet[1306]: I1019 16:22:50.492143    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.835392121 podStartE2EDuration="1.492121557s" podCreationTimestamp="2025-10-19 16:22:49 +0000 UTC" firstStartedPulling="2025-10-19 16:22:49.363946357 +0000 UTC m=+93.351287429" lastFinishedPulling="2025-10-19 16:22:50.020675779 +0000 UTC m=+94.008016865" observedRunningTime="2025-10-19 16:22:50.49134137 +0000 UTC m=+94.478682463" watchObservedRunningTime="2025-10-19 16:22:50.492121557 +0000 UTC m=+94.479462650"
	Oct 19 16:22:52 addons-557770 kubelet[1306]: I1019 16:22:52.098630    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c77956-0981-481e-912e-93621839826d" path="/var/lib/kubelet/pods/f7c77956-0981-481e-912e-93621839826d/volumes"
	Oct 19 16:22:56 addons-557770 kubelet[1306]: I1019 16:22:56.098871    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd27a72-4266-49e1-a8e6-aa52dc424d90" path="/var/lib/kubelet/pods/edd27a72-4266-49e1-a8e6-aa52dc424d90/volumes"
	Oct 19 16:22:56 addons-557770 kubelet[1306]: I1019 16:22:56.910442    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/abbae647-508f-451d-a8ba-c9adc7bfedec-data\") pod \"helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea\" (UID: \"abbae647-508f-451d-a8ba-c9adc7bfedec\") " pod="local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea"
	Oct 19 16:22:56 addons-557770 kubelet[1306]: I1019 16:22:56.910513    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/abbae647-508f-451d-a8ba-c9adc7bfedec-script\") pod \"helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea\" (UID: \"abbae647-508f-451d-a8ba-c9adc7bfedec\") " pod="local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea"
	Oct 19 16:22:56 addons-557770 kubelet[1306]: I1019 16:22:56.910552    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4h8d\" (UniqueName: \"kubernetes.io/projected/abbae647-508f-451d-a8ba-c9adc7bfedec-kube-api-access-g4h8d\") pod \"helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea\" (UID: \"abbae647-508f-451d-a8ba-c9adc7bfedec\") " pod="local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea"
	Oct 19 16:22:56 addons-557770 kubelet[1306]: I1019 16:22:56.910693    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/abbae647-508f-451d-a8ba-c9adc7bfedec-gcp-creds\") pod \"helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea\" (UID: \"abbae647-508f-451d-a8ba-c9adc7bfedec\") " pod="local-path-storage/helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea"
	
	
	==> storage-provisioner [8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693] <==
	W1019 16:22:33.808559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:35.811685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:35.815715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:37.818847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:37.825060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:39.828327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:39.832848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:41.836295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:41.840703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:43.844686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:43.848903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:45.851356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:45.855516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:47.886463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:48.142907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:50.146935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:50.152349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:52.155576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:52.159853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:54.162847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:54.166841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:56.169582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:56.176199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:58.179619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:22:58.183723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
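Note on the storage-provisioner block above: the repeated warnings are emitted by the Kubernetes client's warning handler because the provisioner still lists/watches v1 Endpoints, which is deprecated in favor of the discovery.k8s.io/v1 EndpointSlice API. A quick way to compare the two resources on this cluster (a sketch; the context name is taken from this report, the rest is stock kubectl):

	# Deprecated resource the provisioner still lists/watches:
	kubectl --context addons-557770 get endpoints -n kube-system
	# Its discovery.k8s.io/v1 replacement:
	kubectl --context addons-557770 get endpointslices.discovery.k8s.io -n kube-system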
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-557770 -n addons-557770
helpers_test.go:269: (dbg) Run:  kubectl --context addons-557770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q registry-creds-764b6fb674-9zcvj helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-557770 describe pod test-local-path ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q registry-creds-764b6fb674-9zcvj helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-557770 describe pod test-local-path ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q registry-creds-764b6fb674-9zcvj helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea: exit status 1 (70.533957ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bst8 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-9bst8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7tns9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kb26q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-9zcvj" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-557770 describe pod test-local-path ingress-nginx-admission-create-7tns9 ingress-nginx-admission-patch-kb26q registry-creds-764b6fb674-9zcvj helper-pod-create-pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable headlamp --alsologtostderr -v=1: exit status 11 (237.506939ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:22:59.066912   18196 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:22:59.067085   18196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:59.067097   18196 out.go:374] Setting ErrFile to fd 2...
	I1019 16:22:59.067103   18196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:22:59.067322   18196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:22:59.067625   18196 mustload.go:66] Loading cluster: addons-557770
	I1019 16:22:59.067960   18196 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:59.067973   18196 addons.go:607] checking whether the cluster is paused
	I1019 16:22:59.068050   18196 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:59.068061   18196 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:22:59.068501   18196 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:22:59.087224   18196 ssh_runner.go:195] Run: systemctl --version
	I1019 16:22:59.087276   18196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:22:59.106495   18196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:22:59.201922   18196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:22:59.202024   18196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:22:59.233157   18196 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:22:59.233195   18196 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:22:59.233199   18196 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:22:59.233202   18196 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:22:59.233205   18196 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:22:59.233209   18196 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:22:59.233211   18196 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:22:59.233214   18196 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:22:59.233216   18196 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:22:59.233225   18196 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:22:59.233228   18196 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:22:59.233231   18196 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:22:59.233233   18196 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:22:59.233236   18196 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:22:59.233238   18196 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:22:59.233249   18196 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:22:59.233256   18196 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:22:59.233260   18196 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:22:59.233263   18196 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:22:59.233265   18196 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:22:59.233271   18196 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:22:59.233273   18196 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:22:59.233275   18196 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:22:59.233278   18196 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:22:59.233280   18196 cri.go:89] found id: ""
	I1019 16:22:59.233327   18196 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:22:59.246851   18196 out.go:203] 
	W1019 16:22:59.248150   18196 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:22:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:22:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:22:59.248178   18196 out.go:285] * 
	* 
	W1019 16:22:59.251159   18196 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:22:59.252473   18196 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.67s)
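Note: every addons-disable failure in this report shares the signature above: the paused-state check enumerates the kube-system containers via crictl successfully, then shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node (crio keeps its runtime state elsewhere). A minimal sketch for reproducing the check by hand, using the profile name from this report and the same command forms that appear in the log:

	# crictl sees the kube-system containers:
	minikube -p addons-557770 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# ...but the direct runc query fails on crio:
	minikube -p addons-557770 ssh "sudo runc list -f json"
	# => level=error msg="open /run/runc: no such file or directory", exit status 1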

x
+
TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-w5gdv" [cb3f8846-5136-4b1a-a516-edaac4f96f57] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003414573s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (255.483211ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:04.318948   18595 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:04.319290   18595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:04.319302   18595 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:04.319306   18595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:04.319529   18595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:04.319829   18595 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:04.320237   18595 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:04.320260   18595 addons.go:607] checking whether the cluster is paused
	I1019 16:23:04.320418   18595 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:04.320436   18595 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:04.320862   18595 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:04.342111   18595 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:04.342164   18595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:04.364494   18595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:04.463049   18595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:04.463160   18595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:04.496567   18595 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:04.496594   18595 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:04.496600   18595 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:04.496605   18595 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:04.496609   18595 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:04.496615   18595 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:04.496618   18595 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:04.496623   18595 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:04.496628   18595 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:04.496640   18595 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:04.496644   18595 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:04.496648   18595 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:04.496652   18595 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:04.496657   18595 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:04.496661   18595 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:04.496667   18595 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:04.496670   18595 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:04.496680   18595 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:04.496684   18595 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:04.496688   18595 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:04.496698   18595 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:04.496702   18595 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:04.496706   18595 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:04.496709   18595 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:04.496713   18595 cri.go:89] found id: ""
	I1019 16:23:04.496764   18595 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:04.512457   18595 out.go:203] 
	W1019 16:23:04.513708   18595 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:04.513726   18595 out.go:285] * 
	* 
	W1019 16:23:04.516743   18595 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:04.518321   18595 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

x
+
TestAddons/parallel/LocalPath (8.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-557770 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-557770 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [265c199c-ac60-4840-b919-1565cb3d010d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [265c199c-ac60-4840-b919-1565cb3d010d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [265c199c-ac60-4840-b919-1565cb3d010d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002872016s
addons_test.go:967: (dbg) Run:  kubectl --context addons-557770 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 ssh "cat /opt/local-path-provisioner/pvc-af712dd3-5e9b-4878-a6fc-b8b4a8617cea_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-557770 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-557770 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (247.936346ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:04.511041   18644 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:04.511359   18644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:04.511370   18644 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:04.511375   18644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:04.511617   18644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:04.511921   18644 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:04.512316   18644 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:04.512335   18644 addons.go:607] checking whether the cluster is paused
	I1019 16:23:04.512461   18644 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:04.512479   18644 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:04.513054   18644 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:04.534546   18644 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:04.534608   18644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:04.555181   18644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:04.652311   18644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:04.652388   18644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:04.683457   18644 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:04.683482   18644 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:04.683486   18644 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:04.683489   18644 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:04.683492   18644 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:04.683494   18644 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:04.683497   18644 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:04.683499   18644 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:04.683502   18644 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:04.683522   18644 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:04.683527   18644 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:04.683531   18644 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:04.683534   18644 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:04.683538   18644 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:04.683542   18644 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:04.683562   18644 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:04.683572   18644 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:04.683577   18644 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:04.683579   18644 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:04.683582   18644 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:04.683585   18644 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:04.683596   18644 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:04.683601   18644 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:04.683603   18644 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:04.683606   18644 cri.go:89] found id: ""
	I1019 16:23:04.683652   18644 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:04.698045   18644 out.go:203] 
	W1019 16:23:04.699723   18644 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:04.699752   18644 out.go:285] * 
	* 
	W1019 16:23:04.702776   18644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:04.704793   18644 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.12s)
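Note: the LocalPath data path itself verified cleanly (the PVC bound, the pod wrote file1, and the ssh cat read it back); only the trailing addons-disable step failed, with the same runc error as above. For reference, a manifest pair consistent with the test-local-path pod described earlier in this report — a reconstruction sketch, not the actual testdata files; the storageClassName, storage request, and restartPolicy are assumptions, everything else comes from the describe output:

	# Sketch reconstructed from the describe output; <<- strips the leading tabs.
	kubectl --context addons-557770 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path   # assumption: class installed by storage-provisioner-rancher
	  resources:
	    requests:
	      storage: 64Mi              # assumption: size not shown in the report
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	  labels:
	    run: test-local-path
	spec:
	  restartPolicy: Never           # assumption: consistent with the pod reaching Succeeded
	  containers:
	  - name: busybox
	    image: busybox:stable
	    command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
	    volumeMounts:
	    - name: data
	      mountPath: /test
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: test-pvc
	EOF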

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5d5sr" [b4d3ae84-fa02-4220-af1a-6d1eba3ff1a6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003595126s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.927703ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:01.642391   18330 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:01.642715   18330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:01.642725   18330 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:01.642730   18330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:01.643026   18330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:01.643326   18330 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:01.643677   18330 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:01.643694   18330 addons.go:607] checking whether the cluster is paused
	I1019 16:23:01.643792   18330 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:01.643808   18330 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:01.644256   18330 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:01.663031   18330 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:01.663114   18330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:01.682879   18330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:01.778870   18330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:01.778962   18330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:01.811236   18330 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:01.811260   18330 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:01.811266   18330 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:01.811271   18330 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:01.811276   18330 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:01.811282   18330 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:01.811284   18330 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:01.811288   18330 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:01.811292   18330 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:01.811305   18330 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:01.811314   18330 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:01.811319   18330 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:01.811323   18330 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:01.811327   18330 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:01.811331   18330 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:01.811337   18330 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:01.811342   18330 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:01.811348   18330 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:01.811352   18330 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:01.811356   18330 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:01.811360   18330 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:01.811363   18330 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:01.811367   18330 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:01.811369   18330 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:01.811377   18330 cri.go:89] found id: ""
	I1019 16:23:01.811417   18330 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:01.827192   18330 out.go:203] 
	W1019 16:23:01.828363   18330 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:01.828383   18330 out.go:285] * 
	* 
	W1019 16:23:01.832831   18330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:01.834130   18330 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

x
+
TestAddons/parallel/Yakd (5.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-g4ltn" [1d6e3c32-30cf-4aad-8e2c-0b8601837c6a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004243224s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable yakd --alsologtostderr -v=1: exit status 11 (238.063627ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:15.570737   20304 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:15.571058   20304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:15.571081   20304 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:15.571088   20304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:15.571320   20304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:15.571634   20304 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:15.572127   20304 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:15.572152   20304 addons.go:607] checking whether the cluster is paused
	I1019 16:23:15.572305   20304 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:15.572327   20304 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:15.572885   20304 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:15.591723   20304 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:15.591815   20304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:15.610554   20304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:15.708002   20304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:15.708132   20304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:15.738936   20304 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:15.738958   20304 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:15.738961   20304 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:15.738965   20304 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:15.738968   20304 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:15.738972   20304 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:15.738974   20304 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:15.738977   20304 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:15.738979   20304 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:15.738984   20304 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:15.738986   20304 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:15.738989   20304 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:15.738991   20304 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:15.738994   20304 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:15.738996   20304 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:15.739000   20304 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:15.739002   20304 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:15.739007   20304 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:15.739009   20304 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:15.739012   20304 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:15.739014   20304 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:15.739017   20304 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:15.739020   20304 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:15.739024   20304 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:15.739026   20304 cri.go:89] found id: ""
	I1019 16:23:15.739063   20304 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:15.753945   20304 out.go:203] 
	W1019 16:23:15.755281   20304 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:15.755303   20304 out.go:285] * 
	* 
	W1019 16:23:15.758599   20304 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:15.760145   20304 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)
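Every addon-disable failure in this run has the same signature: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json" on the node. On this crio node the command exits 1 because runc's default state directory /run/runc does not exist (crio evidently keeps its container state under a different root), so the check is reported as MK_ADDON_DISABLE_PAUSED even though nothing is paused. The Go sketch below is a minimal reduction of that failure mode, not minikube's actual code; the helper name pausedCheck and the fixed command line are assumptions for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pausedCheck mirrors the logged behavior: run `sudo runc list -f json`
	// and treat any non-zero exit as fatal. A missing /run/runc directory
	// therefore surfaces as the MK_ADDON_DISABLE_PAUSED error seen above.
	func pausedCheck() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node: level=error msg="open /run/runc: no such file or directory"
			return fmt.Errorf("check paused: list paused: runc: %w\n%s", err, out)
		}
		fmt.Printf("runc containers: %s\n", out)
		return nil
	}

	func main() {
		if err := pausedCheck(); err != nil {
			fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
		}
	}

Note that the crictl listing in the cri.go:54 lines above still succeeds on the same node, which suggests the pause check, not the cluster, is at fault.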

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-66kws" [583f9bcd-aa6d-49aa-a883-8647ec131d3f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003829224s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-557770 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-557770 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (256.450441ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 16:23:06.897360   18851 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:06.897744   18851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:06.897759   18851 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:06.897767   18851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:06.898083   18851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:23:06.898460   18851 mustload.go:66] Loading cluster: addons-557770
	I1019 16:23:06.898991   18851 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:06.899013   18851 addons.go:607] checking whether the cluster is paused
	I1019 16:23:06.899171   18851 config.go:182] Loaded profile config "addons-557770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:06.899190   18851 host.go:66] Checking if "addons-557770" exists ...
	I1019 16:23:06.899733   18851 cli_runner.go:164] Run: docker container inspect addons-557770 --format={{.State.Status}}
	I1019 16:23:06.921253   18851 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:06.921325   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557770
	I1019 16:23:06.940939   18851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/addons-557770/id_rsa Username:docker}
	I1019 16:23:07.038964   18851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:07.039056   18851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:07.070198   18851 cri.go:89] found id: "9d790d16e895919a1170fc8db9b5e7568e08763e98bd7e71211780308a212097"
	I1019 16:23:07.070222   18851 cri.go:89] found id: "5888ac56628ff93145c6aece5f1120cd8eb8decc9375b9ca12c2230ec4c1defb"
	I1019 16:23:07.070233   18851 cri.go:89] found id: "50a236ad43b3d311660fac7a7b4f30d2e7c648b9652ad78e1ba950060276f923"
	I1019 16:23:07.070237   18851 cri.go:89] found id: "05400eb2fd5eb5dab037918d1625bce04346352c5eb622cb825bc5d8be2d837e"
	I1019 16:23:07.070239   18851 cri.go:89] found id: "2e93d36349d27da10dcd3ea12fffbcf50506c87892ebdc8c2ea984ea7cc55440"
	I1019 16:23:07.070242   18851 cri.go:89] found id: "c503d7edb96f3d0cc1492dbef9c424367ec8ca4b0115a5c993d0ff8d12eedf93"
	I1019 16:23:07.070245   18851 cri.go:89] found id: "bfa49cfea4019265b1d3841ebbabe50ea8270a753eb9f5e68565515d284992d5"
	I1019 16:23:07.070248   18851 cri.go:89] found id: "aeccc8f632779e72d498e4e2c918b7701815fd1595f5123cba0d3e3306a5b8fa"
	I1019 16:23:07.070250   18851 cri.go:89] found id: "d3273937d7efc2090dbf92878ee357891bac8e1fd1ac928ff06fa33207f8acd6"
	I1019 16:23:07.070257   18851 cri.go:89] found id: "6ef8a71fe5d3990e63bf621adc1773300aea754a4ae4ba88cb908af6033475fc"
	I1019 16:23:07.070261   18851 cri.go:89] found id: "6e2091eec84de97496ef71ec05866a5e84fc72136ebd8fba19aae91efa63d9e3"
	I1019 16:23:07.070265   18851 cri.go:89] found id: "037c0332147ce352c19f75a68693f361440e2fb38139735ad856f224c9190c1f"
	I1019 16:23:07.070269   18851 cri.go:89] found id: "65863d6eb03db19db922d886803cf53a0cb4c1aa50b354e37b1d7abdbe4cd53e"
	I1019 16:23:07.070274   18851 cri.go:89] found id: "28dca4a98797d39e429fa32ded7e16d2d4d1e675523de1c726f9a2abb9cf1ec4"
	I1019 16:23:07.070282   18851 cri.go:89] found id: "893241a14d701c6bdcba867e2fd1fe8a4cd5738e5119b7e04c7002b291849f9d"
	I1019 16:23:07.070293   18851 cri.go:89] found id: "b929bd44832f67814ec657bdb9e012b7c97cd64985f113692cbc885e53eeb5ed"
	I1019 16:23:07.070300   18851 cri.go:89] found id: "ef3b3e7a4894884f7c8bbd755c56a7700dc943a7928aa0e52f4dbdc81806fd1b"
	I1019 16:23:07.070305   18851 cri.go:89] found id: "8d61047c5353c2c688e2f172eaaf6ad4bd6c3816d4a19f2de39251e386867693"
	I1019 16:23:07.070308   18851 cri.go:89] found id: "91694d16fb2b0ece4c3071344d073973e3e529ad5d23e63ffcbb1aa13844606e"
	I1019 16:23:07.070311   18851 cri.go:89] found id: "00766cde139826cec0b8b6d979bc06db1a8e4c72cc21abb45e687eb3efc6283c"
	I1019 16:23:07.070314   18851 cri.go:89] found id: "49e88f9620ecba4c70cd7b60c353fac9c19df294f9e1cedfb43e5e9d57bb6468"
	I1019 16:23:07.070322   18851 cri.go:89] found id: "64ffa4d775be3d28668524f2c7dd8e1c6ad5eeba22fb96d610f66530c5502bfd"
	I1019 16:23:07.070325   18851 cri.go:89] found id: "7ea12626c6ada213850e1cea66bee9586ca3b56572fe37a2f1d96af64e737eb5"
	I1019 16:23:07.070330   18851 cri.go:89] found id: "75207afa634f810f84239fed0d54861fdc8bcb8c5f5a1e05bb98430d84584d65"
	I1019 16:23:07.070332   18851 cri.go:89] found id: ""
	I1019 16:23:07.070387   18851 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:07.088405   18851 out.go:203] 
	W1019 16:23:07.089933   18851 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:07.089965   18851 out.go:285] * 
	* 
	W1019 16:23:07.093746   18851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:07.096217   18851 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-557770 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

TestFunctional/parallel/DashboardCmd (302.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-507544 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-507544 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-507544 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-507544 --alsologtostderr -v=1] stderr:
I1019 16:29:33.490107   45881 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:33.490267   45881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:33.490279   45881 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:33.490283   45881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:33.490519   45881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:29:33.490811   45881 mustload.go:66] Loading cluster: functional-507544
I1019 16:29:33.491219   45881 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:29:33.491580   45881 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:29:33.510592   45881 host.go:66] Checking if "functional-507544" exists ...
I1019 16:29:33.510868   45881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:29:33.571621   45881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:33.560369942 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1019 16:29:33.571761   45881 api_server.go:166] Checking apiserver status ...
I1019 16:29:33.571822   45881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1019 16:29:33.571868   45881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:29:33.591738   45881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:29:33.694764   45881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4197/cgroup
W1019 16:29:33.703973   45881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4197/cgroup: Process exited with status 1
stdout:

stderr:
I1019 16:29:33.704033   45881 ssh_runner.go:195] Run: ls
I1019 16:29:33.708120   45881 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1019 16:29:33.712235   45881 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1019 16:29:33.712277   45881 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1019 16:29:33.712423   45881 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:29:33.712431   45881 addons.go:70] Setting dashboard=true in profile "functional-507544"
I1019 16:29:33.712440   45881 addons.go:239] Setting addon dashboard=true in "functional-507544"
I1019 16:29:33.712466   45881 host.go:66] Checking if "functional-507544" exists ...
I1019 16:29:33.712770   45881 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:29:33.733261   45881 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1019 16:29:33.734700   45881 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1019 16:29:33.735898   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1019 16:29:33.735920   45881 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1019 16:29:33.736016   45881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:29:33.754387   45881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:29:33.857566   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1019 16:29:33.857596   45881 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1019 16:29:33.870726   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1019 16:29:33.870750   45881 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1019 16:29:33.884247   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1019 16:29:33.884273   45881 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1019 16:29:33.897315   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1019 16:29:33.897339   45881 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1019 16:29:33.910762   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1019 16:29:33.910788   45881 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1019 16:29:33.923863   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1019 16:29:33.923890   45881 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1019 16:29:33.937762   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1019 16:29:33.937787   45881 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1019 16:29:33.951341   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1019 16:29:33.951367   45881 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1019 16:29:33.964685   45881 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:33.964711   45881 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1019 16:29:33.979354   45881 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:34.489831   45881 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-507544 addons enable metrics-server

I1019 16:29:34.491142   45881 addons.go:202] Writing out "functional-507544" config to set dashboard=true...
W1019 16:29:34.491376   45881 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1019 16:29:34.492019   45881 kapi.go:59] client config for functional-507544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1019 16:29:34.492653   45881 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1019 16:29:34.492680   45881 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1019 16:29:34.492687   45881 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1019 16:29:34.492698   45881 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1019 16:29:34.492706   45881 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1019 16:29:34.501409   45881 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  2c10312e-a33c-4519-9c59-c397cba0bed9 757 0 2025-10-19 16:29:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-19 16:29:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.10.181,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.10.181],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1019 16:29:34.501532   45881 out.go:285] * Launching proxy ...
* Launching proxy ...
I1019 16:29:34.501581   45881 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-507544 proxy --port 36195]
I1019 16:29:34.501807   45881 dashboard.go:159] Waiting for kubectl to output host:port ...
I1019 16:29:34.553315   45881 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1019 16:29:34.553364   45881 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1019 16:29:34.562101   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27690452-fc1e-4655-918e-4178bf7cc07b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000417740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004968c0 TLS:<nil>}
I1019 16:29:34.562215   45881 retry.go:31] will retry after 61.021µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.566425   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9123119e-f9c9-4e89-9da3-55ab8d0ef4ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000417840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000713e00 TLS:<nil>}
I1019 16:29:34.566503   45881 retry.go:31] will retry after 208.841µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.570707   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa57b608-9f2e-4dd2-b5ba-11bb2ff78941] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000896380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b0000 TLS:<nil>}
I1019 16:29:34.570820   45881 retry.go:31] will retry after 230.8µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.574973   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63b5e1e1-8d0e-44b4-8f03-550171813c93] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000417900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000496a00 TLS:<nil>}
I1019 16:29:34.575043   45881 retry.go:31] will retry after 504.561µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.578822   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ba17119-d6ae-4248-b866-cafd6df31902] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000896480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b0140 TLS:<nil>}
I1019 16:29:34.578881   45881 retry.go:31] will retry after 504.078µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.582534   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df5a7f8c-512b-46ad-9301-09491a15911f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000417a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000496b40 TLS:<nil>}
I1019 16:29:34.582628   45881 retry.go:31] will retry after 1.015198ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.586044   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1abd49d6-c096-4b4a-961d-587d04c684ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc0008965c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b0280 TLS:<nil>}
I1019 16:29:34.586127   45881 retry.go:31] will retry after 626.556µs: Temporary Error: unexpected response code: 503
I1019 16:29:34.591449   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[012692e5-eb69-4d27-beb7-107dbc3cde08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc00038d7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000496c80 TLS:<nil>}
I1019 16:29:34.591517   45881 retry.go:31] will retry after 1.912125ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.596062   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0288b84f-e508-4837-ad10-8b28415d3cb1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000896880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316780 TLS:<nil>}
I1019 16:29:34.596135   45881 retry.go:31] will retry after 1.437368ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.602854   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b1f5974-6a40-4bec-9582-cc7c2d1649b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc001798040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000496dc0 TLS:<nil>}
I1019 16:29:34.602911   45881 retry.go:31] will retry after 4.597119ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.610692   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b30d7f6e-7fb7-45e4-a456-f9d77fd9b8f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000767440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003168c0 TLS:<nil>}
I1019 16:29:34.610751   45881 retry.go:31] will retry after 6.179246ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.621140   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b80bbd7e-8408-4271-a103-ba52f13bf265] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc001798100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I1019 16:29:34.621218   45881 retry.go:31] will retry after 10.889964ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.635292   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[406b7661-1623-4e9b-93f0-46db6c81d7d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000767500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316a00 TLS:<nil>}
I1019 16:29:34.635362   45881 retry.go:31] will retry after 11.131356ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.650982   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d10502a-8667-4e48-a875-474f229b55ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc0008971c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf540 TLS:<nil>}
I1019 16:29:34.651094   45881 retry.go:31] will retry after 22.486956ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.678826   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7cc03fdd-3c08-4257-9c55-811e9d176c3a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000767600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000496f00 TLS:<nil>}
I1019 16:29:34.678877   45881 retry.go:31] will retry after 42.22627ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.725392   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[14234f34-8d96-4892-b1c1-473171d9a6d1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000897480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I1019 16:29:34.725503   45881 retry.go:31] will retry after 42.276113ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.772390   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae9d51ab-028b-41de-9559-20553373ac66] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000767700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497040 TLS:<nil>}
I1019 16:29:34.772463   45881 retry.go:31] will retry after 50.317759ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.826314   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ccfdcb1-a2a6-4e8c-b5c4-064646c711e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000767780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497180 TLS:<nil>}
I1019 16:29:34.826377   45881 retry.go:31] will retry after 123.964971ms: Temporary Error: unexpected response code: 503
I1019 16:29:34.953493   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9e744fd8-3ced-4586-8536-7ac41a2cf9f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:34 GMT]] Body:0xc000897700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I1019 16:29:34.953557   45881 retry.go:31] will retry after 213.825195ms: Temporary Error: unexpected response code: 503
I1019 16:29:35.170962   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30bafe93-ed3a-4c7c-922b-f8400871ed2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:35 GMT]] Body:0xc001798180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497400 TLS:<nil>}
I1019 16:29:35.171025   45881 retry.go:31] will retry after 231.655977ms: Temporary Error: unexpected response code: 503
I1019 16:29:35.406320   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7333cb52-00a4-4c9c-919d-72f93c6ecff1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:35 GMT]] Body:0xc000897800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316b40 TLS:<nil>}
I1019 16:29:35.406390   45881 retry.go:31] will retry after 464.11246ms: Temporary Error: unexpected response code: 503
I1019 16:29:35.873851   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea027450-e3b2-4448-b488-8d92e4f43cdf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:35 GMT]] Body:0xc001798280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497540 TLS:<nil>}
I1019 16:29:35.873903   45881 retry.go:31] will retry after 270.779267ms: Temporary Error: unexpected response code: 503
I1019 16:29:36.148368   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28dfcd60-1be3-4fe9-a521-63097571e567] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:36 GMT]] Body:0xc0008979c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316c80 TLS:<nil>}
I1019 16:29:36.148446   45881 retry.go:31] will retry after 460.50131ms: Temporary Error: unexpected response code: 503
I1019 16:29:36.612247   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[65edd316-11cf-4f4e-ab7c-f35309f63934] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:36 GMT]] Body:0xc000767880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497680 TLS:<nil>}
I1019 16:29:36.612307   45881 retry.go:31] will retry after 651.503366ms: Temporary Error: unexpected response code: 503
I1019 16:29:37.267018   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[436c62ae-2108-4e9e-941e-efdb45a043e4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:37 GMT]] Body:0xc001798380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfa40 TLS:<nil>}
I1019 16:29:37.267103   45881 retry.go:31] will retry after 1.71833049s: Temporary Error: unexpected response code: 503
I1019 16:29:38.989197   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34cd433f-19eb-4011-ad3c-4af89487e267] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:38 GMT]] Body:0xc000897b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316dc0 TLS:<nil>}
I1019 16:29:38.989285   45881 retry.go:31] will retry after 2.055675995s: Temporary Error: unexpected response code: 503
I1019 16:29:41.048909   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b419e2ac-1431-4d9e-81b5-7a83ed0a1b28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:41 GMT]] Body:0xc0007679c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004977c0 TLS:<nil>}
I1019 16:29:41.048968   45881 retry.go:31] will retry after 2.897106116s: Temporary Error: unexpected response code: 503
I1019 16:29:43.950206   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9658dc73-3dae-4ee8-97bb-7a9d9d3f6aab] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:43 GMT]] Body:0xc001798440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I1019 16:29:43.950265   45881 retry.go:31] will retry after 7.990141421s: Temporary Error: unexpected response code: 503
I1019 16:29:51.946119   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[50b7d991-0924-4f16-a98f-a1d8ef9f44e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:51 GMT]] Body:0xc000897c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I1019 16:29:51.946182   45881 retry.go:31] will retry after 5.744212878s: Temporary Error: unexpected response code: 503
I1019 16:29:57.696332   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f668956c-d81e-481d-a157-8c5bbb2b6c2f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:57 GMT]] Body:0xc000767b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316f00 TLS:<nil>}
I1019 16:29:57.696392   45881 retry.go:31] will retry after 18.104325481s: Temporary Error: unexpected response code: 503
I1019 16:30:15.804642   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b7bad00-a3da-48dc-bf48-58d4683a9862] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:15 GMT]] Body:0xc001798540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497900 TLS:<nil>}
I1019 16:30:15.804737   45881 retry.go:31] will retry after 15.093986361s: Temporary Error: unexpected response code: 503
I1019 16:30:30.904306   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d614db23-7df0-4438-932b-f50c3a94fa49] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:30 GMT]] Body:0xc0017985c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001904000 TLS:<nil>}
I1019 16:30:30.904369   45881 retry.go:31] will retry after 23.36625615s: Temporary Error: unexpected response code: 503
I1019 16:30:54.274092   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e85a970-6a36-4ac7-a883-d0c443cb2d33] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:54 GMT]] Body:0xc000897d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317040 TLS:<nil>}
I1019 16:30:54.274155   45881 retry.go:31] will retry after 37.355730289s: Temporary Error: unexpected response code: 503
I1019 16:31:31.635690   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ac8ee27-1ada-45a7-b5c1-82a9c238f019] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:31:31 GMT]] Body:0xc000767c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00189e000 TLS:<nil>}
I1019 16:31:31.635753   45881 retry.go:31] will retry after 1m3.623388767s: Temporary Error: unexpected response code: 503
I1019 16:32:35.262271   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[116497c8-3e3a-4a05-8882-43403eca5964] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:32:35 GMT]] Body:0xc0004160c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001904140 TLS:<nil>}
I1019 16:32:35.262349   45881 retry.go:31] will retry after 55.310578327s: Temporary Error: unexpected response code: 503
I1019 16:33:30.577259   45881 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e6f1c8c4-36d2-46df-824c-8689930d3420] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:33:30 GMT]] Body:0xc001798040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001904280 TLS:<nil>}
I1019 16:33:30.577360   45881 retry.go:31] will retry after 1m7.608696223s: Temporary Error: unexpected response code: 503
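The dashboard never became reachable: every probe of the kubectl proxy URL above returned 503 Service Unavailable, and retry.go backed off from tens of microseconds to over a minute between attempts until the test's roughly five-minute budget lapsed, which is why "dashboard --url" printed no URL. The sketch below shows the probe-with-jittered-backoff pattern those log lines imply; the function name probeUntilOK and the exact backoff factor are assumptions, not minikube's implementation.

	package main

	import (
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	// probeUntilOK polls url until it answers 200 OK or maxWait elapses,
	// sleeping a jittered, growing interval between attempts, matching the
	// "will retry after ..." progression in the log above.
	func probeUntilOK(url string, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		wait := 60 * time.Microsecond
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(wait)
			wait = time.Duration(float64(wait) * (1.2 + rand.Float64())) // jittered backoff
		}
		return fmt.Errorf("no healthy response within %v", maxWait)
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		fmt.Println(probeUntilOK(url, 5*time.Minute))
	}
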
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-507544
helpers_test.go:243: (dbg) docker inspect functional-507544:
-- stdout --
	[
	    {
	        "Id": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	        "Created": "2025-10-19T16:26:45.122198852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:26:45.156822472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hostname",
	        "HostsPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hosts",
	        "LogPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112-json.log",
	        "Name": "/functional-507544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-507544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-507544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	                "LowerDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-507544",
	                "Source": "/var/lib/docker/volumes/functional-507544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-507544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-507544",
	                "name.minikube.sigs.k8s.io": "functional-507544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08d99069f25412728f0de1767e870ab3d5e37b37c9be1e53bb041c372f124f33",
	            "SandboxKey": "/var/run/docker/netns/08d99069f254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-507544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:97:3c:cd:99:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "190b1e4dbc4b84704bafdf33f14b0c728242ffe12133f3a7d8f637228926fb2b",
	                    "EndpointID": "9fefd75494de70091fe4fdf69631663c727e7236d9987ed273d484d28bb5b3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-507544",
	                        "add43b7ec9e0"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
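The full `docker inspect` dump is captured for the post-mortem; when only one field is needed, `docker inspect --format` evaluates a Go template against the same structure. Hypothetical one-off queries against the container from this run:

    # Container state only
    docker inspect -f '{{.State.Status}}' functional-507544
    # Host port bound to the apiserver's 8441/tcp (32781 in the dump above)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-507544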
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-507544 -n functional-507544
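The `--format={{.Host}}` flag above works the same way: `minikube status` accepts a Go template over its status fields, so a post-mortem can grab several components in one call. A sketch, assuming the field names printed by the default `minikube status` output:

    out/minikube-linux-amd64 status -p functional-507544 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'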
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs -n 25: (1.264386453s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-507544 ssh sudo systemctl is-active containerd                                                                                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ ssh       │ functional-507544 ssh findmnt -T /mount2                                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh findmnt -T /mount3                                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ mount     │ -p functional-507544 --kill=true                                                                                                                                │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ addons    │ functional-507544 addons list                                                                                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ addons    │ functional-507544 addons list -o json                                                                                                                           │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/7228.pem                                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /usr/share/ca-certificates/7228.pem                                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/72282.pem                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /usr/share/ca-certificates/72282.pem                                                                                             │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/test/nested/copy/7228/hosts                                                                                                 │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ dashboard │ --url --port 36195 -p functional-507544 --alsologtostderr -v=1                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image save kicbase/echo-server:functional-507544 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image rm kicbase/echo-server:functional-507544 --alsologtostderr                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image save --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:30.118385   44149 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:30.118477   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118484   44149 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:30.118488   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118804   44149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:30.119280   44149 out.go:368] Setting JSON to false
	I1019 16:29:30.120292   44149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:30.120370   44149 start.go:143] virtualization: kvm guest
	I1019 16:29:30.122096   44149 out.go:179] * [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:30.123412   44149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:30.123412   44149 notify.go:221] Checking for updates...
	I1019 16:29:30.124663   44149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:30.125918   44149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:30.127440   44149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:30.128697   44149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:30.130309   44149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:30.131905   44149 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:30.132380   44149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:30.156613   44149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:30.156707   44149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:30.214136   44149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:30.20345749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:30.214295   44149 docker.go:319] overlay module found
	I1019 16:29:30.216279   44149 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:30.217643   44149 start.go:309] selected driver: docker
	I1019 16:29:30.217661   44149 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:30.217741   44149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:30.219580   44149 out.go:203] 
	W1019 16:29:30.220824   44149 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1019 16:29:30.221973   44149 out.go:203] 
	
	
	==> CRI-O <==
	Oct 19 16:30:26 functional-507544 crio[3576]: time="2025-10-19T16:30:26.148826803Z" level=info msg="Stopped pod sandbox (already stopped): c352624d3b9119c7b5d00c2b3a0c2516c55e39f3b0f00f6e1aa8bafeec60886c" id=9b19267e-e6f7-4315-9af2-675c9545165e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:30:26 functional-507544 crio[3576]: time="2025-10-19T16:30:26.149148506Z" level=info msg="Removing pod sandbox: c352624d3b9119c7b5d00c2b3a0c2516c55e39f3b0f00f6e1aa8bafeec60886c" id=95be3a8b-4247-47de-8dd4-06293b101bc3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:30:26 functional-507544 crio[3576]: time="2025-10-19T16:30:26.152132995Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 16:30:26 functional-507544 crio[3576]: time="2025-10-19T16:30:26.15222103Z" level=info msg="Removed pod sandbox: c352624d3b9119c7b5d00c2b3a0c2516c55e39f3b0f00f6e1aa8bafeec60886c" id=95be3a8b-4247-47de-8dd4-06293b101bc3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:30:36 functional-507544 crio[3576]: time="2025-10-19T16:30:36.065633669Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c7ee1a4b-9b41-4f30-8c3b-92ab42aca218 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:30:36 functional-507544 crio[3576]: time="2025-10-19T16:30:36.111653865Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:30:36 functional-507544 crio[3576]: time="2025-10-19T16:30:36.539848719Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=97cb5a2b-838c-41cf-ba7d-7c79eafb4927 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:30:36 functional-507544 crio[3576]: time="2025-10-19T16:30:36.540079935Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=97cb5a2b-838c-41cf-ba7d-7c79eafb4927 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:30:36 functional-507544 crio[3576]: time="2025-10-19T16:30:36.540146962Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=97cb5a2b-838c-41cf-ba7d-7c79eafb4927 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:30:48 functional-507544 crio[3576]: time="2025-10-19T16:30:48.148581424Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=fa5560ee-22fc-4f52-865e-82cd99849ac6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:30:48 functional-507544 crio[3576]: time="2025-10-19T16:30:48.148799986Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=fa5560ee-22fc-4f52-865e-82cd99849ac6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:30:48 functional-507544 crio[3576]: time="2025-10-19T16:30:48.14884429Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=fa5560ee-22fc-4f52-865e-82cd99849ac6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:31:37 functional-507544 crio[3576]: time="2025-10-19T16:31:37.201645654Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:32:07 functional-507544 crio[3576]: time="2025-10-19T16:32:07.852197772Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d1477115-8726-45d6-8591-1242d4f95e85 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:32:07 functional-507544 crio[3576]: time="2025-10-19T16:32:07.856544065Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 19 16:32:08 functional-507544 crio[3576]: time="2025-10-19T16:32:08.792766723Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d47ce273-439c-4690-8215-4b3e0cd423d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:08 functional-507544 crio[3576]: time="2025-10-19T16:32:08.792980144Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d47ce273-439c-4690-8215-4b3e0cd423d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:08 functional-507544 crio[3576]: time="2025-10-19T16:32:08.793039372Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=d47ce273-439c-4690-8215-4b3e0cd423d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148227283Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148419886Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148465196Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:38 functional-507544 crio[3576]: time="2025-10-19T16:32:38.503578793Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 19 16:33:09 functional-507544 crio[3576]: time="2025-10-19T16:33:09.152168719Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=50ddebf5-dd5e-43bb-a11a-2e6f74361ffc name=/runtime.v1.ImageService/PullImage
	Oct 19 16:33:09 functional-507544 crio[3576]: time="2025-10-19T16:33:09.15648869Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 19 16:34:10 functional-507544 crio[3576]: time="2025-10-19T16:34:10.24769623Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	874c2ff14b97f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   5 minutes ago       Exited              mount-munger              0                   7580ba9d2c75e       busybox-mount                               default
	f909fd2f1f12b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       5 minutes ago       Running             nginx                     0                   0136c3b1d2067       nginx-svc                                   default
	1d5ff7dca36a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       2                   cb2b6d0554192       storage-provisioner                         kube-system
	bd130c1088bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      5 minutes ago       Running             kube-apiserver            0                   f1c19208fe76e       kube-apiserver-functional-507544            kube-system
	2996708114c16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      5 minutes ago       Running             kube-scheduler            1                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	6f9d68db6d3d6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      5 minutes ago       Running             kube-controller-manager   1                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	232b3554efed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      5 minutes ago       Running             etcd                      1                   2c3aea9080115       etcd-functional-507544                      kube-system
	b1a6aad7bccb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       1                   cb2b6d0554192       storage-provisioner                         kube-system
	a6ac3c40d3695       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	d105c28b8985b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                1                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	10025537dc3ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      6 minutes ago       Running             kindnet-cni               1                   e9657439b413e       kindnet-mvc2p                               kube-system
	3cdd80e1fd14a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Exited              coredns                   0                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	c4f6db9ff93af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                0                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	8fcdf9dec3441       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   e9657439b413e       kindnet-mvc2p                               kube-system
	c992a57ea0446       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      0                   2c3aea9080115       etcd-functional-507544                      kube-system
	61c5eb9cc5552       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            0                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	5886e461a7b4a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   0                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	
	
	==> coredns [3cdd80e1fd14ae4035e4531b71f7296d418ddaf9b3f2faf0be415a36ca1d2613] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53096 - 64513 "HINFO IN 5388708912172919926.3799386503118995872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.101589867s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6ac3c40d36958a6de862293b761e2d75aeab587854de7d49bfae769f19fb001] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58675 - 8925 "HINFO IN 3256408165590317714.5502113091922773666. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058726274s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-507544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-507544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-507544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_27_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:26:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-507544
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:34:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:34:13 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:34:13 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:34:13 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:34:13 +0000   Sun, 19 Oct 2025 16:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-507544
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8e1900d7-85d2-490a-a8f6-e11dcc838551
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-mv5h7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     mysql-5bb876957f-vgwqp                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m58s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-z4xwl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m27s
	  kube-system                 etcd-functional-507544                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m33s
	  kube-system                 kindnet-mvc2p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m27s
	  kube-system                 kube-apiserver-functional-507544              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-functional-507544     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-rwnpm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-functional-507544              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v4d72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqsqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m26s                  kube-proxy       
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m38s (x8 over 7m38s)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x8 over 7m38s)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x8 over 7m38s)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m33s                  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m33s                  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m33s                  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m28s                  node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	  Normal  NodeReady                6m46s                  kubelet          Node functional-507544 status is now: NodeReady
	  Normal  Starting                 6m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)    kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)    kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x8 over 6m8s)    kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m43s                  node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [232b3554efed3e895da66259cb238b385489998d30cbf9437a17db6348583118] <==
	{"level":"warn","ts":"2025-10-19T16:28:47.138344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.147387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.156109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.162655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.168752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.174569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.180801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.187323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.193536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.201096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.209599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.216364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.222822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.230208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.237492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.250556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.257478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.264056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.270509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.277016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.283231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.295941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.303147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.309595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.356254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	
	
	==> etcd [c992a57ea0446745a96ba364917b05db213b70a8a91f3769afc9ebce3fdf3850] <==
	{"level":"warn","ts":"2025-10-19T16:26:58.808666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.814744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.831321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.834820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.840818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.846932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.890385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:28:24.086788Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:28:24.086876Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:28:24.086990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088545Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088577Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088642Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088649Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088644Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:28:24.088665Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088664Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088696Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090695Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:28:24.090752Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090785Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:28:24.090794Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:34:34 up 17 min,  0 user,  load average: 0.20, 0.50, 0.47
	Linux functional-507544 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10025537dc3abc6d9b1d66732463d3b2b95aa6fe95fe6ce8440ffb8252db820f] <==
	I1019 16:32:34.392343       1 main.go:301] handling current node
	I1019 16:32:44.390723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:32:44.390761       1 main.go:301] handling current node
	I1019 16:32:54.387662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:32:54.387699       1 main.go:301] handling current node
	I1019 16:33:04.387804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:04.387861       1 main.go:301] handling current node
	I1019 16:33:14.390907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:14.390951       1 main.go:301] handling current node
	I1019 16:33:24.387650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:24.387705       1 main.go:301] handling current node
	I1019 16:33:34.392861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:34.392902       1 main.go:301] handling current node
	I1019 16:33:44.391111       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:44.391148       1 main.go:301] handling current node
	I1019 16:33:54.389731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:54.389767       1 main.go:301] handling current node
	I1019 16:34:04.387704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:04.387757       1 main.go:301] handling current node
	I1019 16:34:14.387729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:14.387771       1 main.go:301] handling current node
	I1019 16:34:24.393012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:24.393049       1 main.go:301] handling current node
	I1019 16:34:34.396520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:34.396562       1 main.go:301] handling current node
	
	
	==> kindnet [8fcdf9dec34411c1d2d1bdbf7f4262661b43f3d0d72fdbb514c24d6552cb6e4f] <==
	I1019 16:27:07.894506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:27:07.894803       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:27:07.894944       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:27:07.894962       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:27:07.894984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:27:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:27:08.096111       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:27:08.096411       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:27:08.096445       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:27:08.096616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 16:27:38.097600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 16:27:38.097611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 16:27:39.696719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:27:39.696755       1 metrics.go:72] Registering metrics
	I1019 16:27:39.696818       1 controller.go:711] "Syncing nftables rules"
	I1019 16:27:48.104019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:48.104059       1 main.go:301] handling current node
	I1019 16:27:58.100727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:58.100769       1 main.go:301] handling current node
	I1019 16:28:08.100413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:08.100450       1 main.go:301] handling current node
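	
	The "dial tcp 10.96.0.1:443: i/o timeout" reflector errors above mean the in-cluster apiserver VIP was unreachable while the apiserver restarted; the caches synced once it returned at 16:27:39. A sketch probing the VIP the same way kindnet does (assumes curl is present in the node image; an HTTP 401/403 body still proves the VIP is reachable, while a timeout matches the errors above):
	
	  minikube ssh -p functional-507544 -- \
	    curl -sk --max-time 5 https://10.96.0.1:443/healthz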
	
	
	==> kube-apiserver [bd130c1088bf7d1c124730d74a847b71c9821cf326a67aad1216cb07a547b96a] <==
	I1019 16:28:47.807638       1 cache.go:39] Caches are synced for autoregister controller
	I1019 16:28:47.809715       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 16:28:47.811965       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:28:47.816819       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 16:28:47.816846       1 policy_source.go:240] refreshing policies
	I1019 16:28:47.827112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:28:48.710386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1019 16:28:49.017195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 16:28:49.018467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:28:49.023654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:28:49.241858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:28:49.523461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:28:49.620178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:28:49.673977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:28:49.680303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:28:51.304578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:29:14.725009       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.56.157"}
	I1019 16:29:18.524389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.202.206"}
	I1019 16:29:21.388791       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.23.83"}
	I1019 16:29:34.345873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:29:34.466507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.181"}
	I1019 16:29:34.481960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.204.129"}
	E1019 16:29:34.615398       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55370: use of closed network connection
	I1019 16:29:36.447894       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.19.161"}
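	
	The "allocated clusterIPs" entries track the Services the functional tests created (invalid-svc, hello-node, nginx-svc, mysql, plus the dashboard pair). A sketch to list them against what the apiserver allocated:
	
	  kubectl --context functional-507544 get svc -A \
	    -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP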
	
	
	==> kube-controller-manager [5886e461a7b4a183fac570c337eba31221c4f8d80680f651e35b86312b3b4662] <==
	I1019 16:27:06.269895       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:27:06.270018       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:27:06.270100       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 16:27:06.270136       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:27:06.270243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:27:06.270449       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:27:06.270505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:27:06.270520       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:27:06.270493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:27:06.270862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:27:06.270975       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:27:06.271112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:27:06.272386       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:27:06.273558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:27:06.275820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:06.278936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 16:27:06.278986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 16:27:06.279015       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 16:27:06.279024       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 16:27:06.279028       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 16:27:06.279131       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:27:06.284734       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-507544" podCIDRs=["10.244.0.0/24"]
	I1019 16:27:06.285694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:06.285701       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:27:51.226515       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
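	
	The node-ipam-controller line above records the pod CIDR assignment (10.244.0.0/24) for the single node. A sketch to verify it landed on the Node object:
	
	  kubectl --context functional-507544 get node functional-507544 \
	    -o jsonpath='{.spec.podCIDR}'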
	
	
	==> kube-controller-manager [6f9d68db6d3d66d4b89c73b933115640996e2ad1584dc4733fec9eb8f8617cee] <==
	I1019 16:28:51.147320       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:28:51.147536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:28:51.147633       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:28:51.148007       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:28:51.148025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 16:28:51.149985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:28:51.151271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.152361       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:28:51.153569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 16:28:51.153598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 16:28:51.153610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:28:51.153623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:28:51.157958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:28:51.160113       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:28:51.161272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 16:28:51.163450       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:28:51.168958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:29:34.405011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.408954       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.412916       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.413153       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.416170       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.421693       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
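	
	The repeated "serviceaccount ... not found" errors are a startup ordering race: the dashboard manifests create the ReplicaSets before their ServiceAccount exists, and the controller retries until it does. A sketch to confirm the objects eventually reconciled:
	
	  kubectl --context functional-507544 -n kubernetes-dashboard \
	    get serviceaccounts,replicasets,pods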
	
	
	==> kube-proxy [c4f6db9ff93af54aa47d8330711b3aacaf2d4ed53868d324569d79661c893e86] <==
	I1019 16:27:07.757997       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:27:07.829126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:27:07.929844       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:27:07.929903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:27:07.930034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:27:07.948180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:27:07.948245       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:27:07.953226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:27:07.953625       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:27:07.953644       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:27:07.954969       1 config.go:309] "Starting node config controller"
	I1019 16:27:07.954988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:27:07.954996       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:27:07.955009       1 config.go:200] "Starting service config controller"
	I1019 16:27:07.955013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:27:07.955024       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:27:07.955040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:27:07.955031       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:27:07.955093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:27:08.055919       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:27:08.055962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:27:08.055920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
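	
	The "nodePortAddresses is unset" warning points at kube-proxy's configuration, which on a kubeadm-style cluster (minikube included) lives in the kube-proxy ConfigMap. A sketch to inspect the field the warning refers to (assumes the kubeadm default ConfigMap name):
	
	  kubectl --context functional-507544 -n kube-system \
	    get configmap kube-proxy -o yaml | grep -n nodePortAddresses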
	
	
	==> kube-proxy [d105c28b8985b9425bc6dc11577787f84445b24a2501dbbf2c5479621ec7d4c5] <==
	E1019 16:28:14.055235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:14.925306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:17.992857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:21.964850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:42.435400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:05.054962       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:05.054999       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:05.055094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:05.075205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:05.075262       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:05.080962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:05.081386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:05.081402       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:05.082685       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:05.082707       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:05.082716       1 config.go:200] "Starting service config controller"
	I1019 16:29:05.082742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:05.082836       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:05.082898       1 config.go:309] "Starting node config controller"
	I1019 16:29:05.082907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:05.082920       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:05.083625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:05.182940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:05.183531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:05.184724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
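	
	The reflector failures above show client-go retrying at progressively longer intervals while the apiserver was down (16:28:14 through 16:28:42, ending in a TLS handshake timeout as the server came partway up), then a clean cache sync at 16:29:05. A sketch to check the apiserver's readiness checks directly once it is back:
	
	  kubectl --context functional-507544 get --raw='/readyz?verbose'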
	
	
	==> kube-scheduler [2996708114c16d776a6aff6f9fcc6dad7f1fef587753492e0a7a7981480bcf7c] <==
	I1019 16:28:46.382085       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:28:47.731959       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:28:47.732000       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:28:47.732014       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:28:47.732024       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:28:47.750471       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:28:47.750506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:47.752862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.752902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.753175       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:28:47.753232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:28:47.853949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
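	
	The requestheader_controller warning above quotes its own remedy. Reproduced here as a sketch only, keeping the log's placeholders (ROLEBINDING_NAME, YOUR_NS:YOUR_SA); note that kube-scheduler itself authenticates as the user system:kube-scheduler rather than a ServiceAccount, so this form applies to components that do run under one:
	
	  kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
	    --role=extension-apiserver-authentication-reader \
	    --serviceaccount=YOUR_NS:YOUR_SA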
	
	
	==> kube-scheduler [61c5eb9cc5552cf24b68447b4ed0fbb9972f27fc884505108656b85504cc2ff2] <==
	E1019 16:26:59.303300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:26:59.303338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:26:59.303374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:26:59.303445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:26:59.303447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:26:59.303520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:26:59.303537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:00.157669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:27:00.187109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:27:00.188168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:27:00.235871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:27:00.343078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:27:00.351277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:27:00.373492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:27:00.375580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:27:00.470255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:27:00.471050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:00.517324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:27:00.899509       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197145       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:28:24.197196       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:28:24.197223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:28:24.197236       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:28:24.197268       1 run.go:72] "command failed" err="finished without leader elect"
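	
	"finished without leader elect" is the scheduler's normal exit path when its leader-election context is cancelled during shutdown, matching the 16:28:24 control-plane restart seen in the etcd log. A sketch to check which instance currently holds the scheduler lease:
	
	  kubectl --context functional-507544 -n kube-system \
	    get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'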
	
	
	==> kubelet <==
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.044676    4118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcq9t\" (UniqueName: \"kubernetes.io/projected/c41361db-2c04-432d-a44f-a792ea1f0ae0-kube-api-access-lcq9t\") on node \"functional-507544\" DevicePath \"\""
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.373253    4118 scope.go:117] "RemoveContainer" containerID="5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2"
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.382228    4118 scope.go:117] "RemoveContainer" containerID="5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2"
	Oct 19 16:29:35 functional-507544 kubelet[4118]: E1019 16:29:35.382741    4118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2\": container with ID starting with 5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2 not found: ID does not exist" containerID="5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2"
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.382793    4118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2"} err="failed to get container status \"5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2\": rpc error: code = NotFound desc = could not find container \"5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2\": container with ID starting with 5def4a07893e70bd312c2a27f69c4439244983b339284dca3f27075fbc56c6c2 not found: ID does not exist"
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.548782    4118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f5da235-244b-493c-ab91-d4c7afa8f791\" (UniqueName: \"kubernetes.io/host-path/f1eb7495-9064-4ffe-979e-857122179f13-pvc-9f5da235-244b-493c-ab91-d4c7afa8f791\") pod \"sp-pod\" (UID: \"f1eb7495-9064-4ffe-979e-857122179f13\") " pod="default/sp-pod"
	Oct 19 16:29:35 functional-507544 kubelet[4118]: I1019 16:29:35.548855    4118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m6v\" (UniqueName: \"kubernetes.io/projected/f1eb7495-9064-4ffe-979e-857122179f13-kube-api-access-q8m6v\") pod \"sp-pod\" (UID: \"f1eb7495-9064-4ffe-979e-857122179f13\") " pod="default/sp-pod"
	Oct 19 16:29:36 functional-507544 kubelet[4118]: I1019 16:29:36.150105    4118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41361db-2c04-432d-a44f-a792ea1f0ae0" path="/var/lib/kubelet/pods/c41361db-2c04-432d-a44f-a792ea1f0ae0/volumes"
	Oct 19 16:29:36 functional-507544 kubelet[4118]: I1019 16:29:36.556342    4118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s558z\" (UniqueName: \"kubernetes.io/projected/2a109cd2-82cc-4066-94eb-c9d0775cb362-kube-api-access-s558z\") pod \"mysql-5bb876957f-vgwqp\" (UID: \"2a109cd2-82cc-4066-94eb-c9d0775cb362\") " pod="default/mysql-5bb876957f-vgwqp"
	Oct 19 16:29:46 functional-507544 kubelet[4118]: E1019 16:29:46.148618    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.065159    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.065226    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.065444    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-v4d72_kubernetes-dashboard(f72c10dc-e8cd-4faf-99bc-7c5d642cadce): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.065495    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.540475    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851597    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851680    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851928    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wqsqt_kubernetes-dashboard(cdb50ed8-0281-40b9-b352-190f2baaf640): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851997    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:32:08 functional-507544 kubelet[4118]: E1019 16:32:08.793415    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151615    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151683    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151916    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(f1eb7495-9064-4ffe-979e-857122179f13): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151968    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.942414    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
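	
	The root cause of the ImagePullBackOff failures above is Docker Hub's unauthenticated pull rate limit (toomanyrequests), not the images themselves. One common workaround is to side-load the images so CRI-O finds them locally; a sketch, assuming the images are already present in the host's local container store (otherwise authenticating with docker login or configuring a registry mirror are the usual alternatives):
	
	  minikube -p functional-507544 image load docker.io/nginx:latest
	  minikube -p functional-507544 image load docker.io/kubernetesui/dashboard:v2.7.0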
	
	
	==> storage-provisioner [1d5ff7dca36a908853bf80f1bdc7a8189fcd31580bdeff4098c2276ae5f95801] <==
	W1019 16:34:10.129869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:12.132651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:12.136540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:14.139815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:14.144638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:16.147856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:16.152032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:18.155276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:18.160755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:20.164497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:20.168717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:22.171974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:22.175732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:24.178925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:24.182854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:26.186084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:26.190562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:28.193555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:28.198930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:30.202509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:30.206718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:32.210453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:32.215663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:34.219171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:34.223516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b1a6aad7bccb7a4da8fd921d34e59bdef668c795f652e7e54ace1bc2adf761a6] <==
	I1019 16:28:13.955241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 16:28:13.956728       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
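Note: the kubelet ErrImagePull entries above are Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a cluster-side fault. One possible mitigation, sketched here under the assumption that the CI host has Docker Hub credentials available, is to pull the image once on the host and side-load it into the profile so the cluster never hits the registry:

	docker login                                                              # authenticate so pulls count against the account quota
	docker pull docker.io/library/nginx:latest                                # pull once on the host
	minikube -p functional-507544 image load docker.io/library/nginx:latest   # copy the image into the cluster's runtime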
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
helpers_test.go:269: (dbg) Run:  kubectl --context functional-507544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-mv5h7 mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1 (104.93369ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://874c2ff14b97f5564fd2f8b7ea851875753a6ff2aca767743f234378aa18a8cf
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:29:24 +0000
	      Finished:     Sun, 19 Oct 2025 16:29:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxtmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pxtmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m14s  default-scheduler  Successfully assigned default/busybox-mount to functional-507544
	  Normal  Pulling    5m14s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 846ms (2.459s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-mv5h7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n469f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n469f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m17s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mv5h7 to functional-507544
	  Warning  Failed     5m1s (x2 over 5m17s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     5m1s (x2 over 5m17s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m49s (x2 over 5m16s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m49s (x2 over 5m16s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4m35s (x3 over 5m17s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-vgwqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s558z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s558z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m59s  default-scheduler  Successfully assigned default/mysql-5bb876957f-vgwqp to functional-507544
	  Normal  Pulling    4m59s  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8m6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-q8m6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  5m                default-scheduler  Successfully assigned default/sp-pod to functional-507544
	  Warning  Failed     86s               kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s               kubelet            Error: ErrImagePull
	  Normal   BackOff    86s               kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     86s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    73s (x2 over 5m)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v4d72" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wqsqt" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.24s)
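Note: the hello-node failures above have a different root cause than the nginx/mysql pulls. CRI-O here enforces short-name resolution (short-name-mode = "enforcing" in containers-registries.conf terms), so the unqualified reference kicbase/echo-server:latest resolves to an ambiguous candidate list and is rejected before any rate limit comes into play. A fully qualified reference sidesteps the ambiguity; a minimal sketch, assuming the Docker Hub copy of the image is the one intended:

	kubectl --context functional-507544 create deployment hello-node --image=docker.io/kicbase/echo-server:latest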

TestFunctional/parallel/ServiceCmdConnect (602.93s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-507544 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-507544 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cx7lv" [661b3d8c-f0db-4f13-85dc-d37412da52f9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-19 16:44:36.156549665 +0000 UTC m=+1461.211815967
functional_test.go:1645: (dbg) Run:  kubectl --context functional-507544 describe po hello-node-connect-7d85dfc575-cx7lv -n default
functional_test.go:1645: (dbg) kubectl --context functional-507544 describe po hello-node-connect-7d85dfc575-cx7lv -n default:
Name:             hello-node-connect-7d85dfc575-cx7lv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-507544/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:34:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:  10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqmbb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qqmbb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cx7lv to functional-507544
  Normal   Pulling    2m49s (x3 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     39s (x3 over 6m21s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     39s (x3 over 6m21s)  kubelet            Error: ErrImagePull
  Normal   BackOff    0s (x5 over 6m21s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     0s (x5 over 6m21s)   kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-507544 logs hello-node-connect-7d85dfc575-cx7lv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-507544 logs hello-node-connect-7d85dfc575-cx7lv -n default: exit status 1 (73.2455ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cx7lv" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-507544 logs hello-node-connect-7d85dfc575-cx7lv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-507544 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cx7lv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-507544/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:34:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:  10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqmbb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qqmbb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cx7lv to functional-507544
  Normal   Pulling    2m49s (x3 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     39s (x3 over 6m21s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     39s (x3 over 6m21s)  kubelet            Error: ErrImagePull
  Normal   BackOff    0s (x5 over 6m21s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     0s (x5 over 6m21s)   kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-507544 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-507544 logs -l app=hello-node-connect: exit status 1 (64.812403ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cx7lv" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-507544 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-507544 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.88.91
IPs:                      10.110.88.91
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32165/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
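Note: the empty Endpoints: field above follows directly from the ImagePullBackOff: no pod matching app=hello-node-connect ever became Ready, so the Service has no backends and the NodePort cannot answer. One quick way to confirm this, using the EndpointSlice API that the storage-provisioner warnings earlier recommend over the deprecated v1 Endpoints:

	kubectl --context functional-507544 get endpointslices -l kubernetes.io/service-name=hello-node-connect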
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-507544
helpers_test.go:243: (dbg) docker inspect functional-507544:

-- stdout --
	[
	    {
	        "Id": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	        "Created": "2025-10-19T16:26:45.122198852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:26:45.156822472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hostname",
	        "HostsPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hosts",
	        "LogPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112-json.log",
	        "Name": "/functional-507544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-507544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-507544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	                "LowerDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-507544",
	                "Source": "/var/lib/docker/volumes/functional-507544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-507544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-507544",
	                "name.minikube.sigs.k8s.io": "functional-507544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08d99069f25412728f0de1767e870ab3d5e37b37c9be1e53bb041c372f124f33",
	            "SandboxKey": "/var/run/docker/netns/08d99069f254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-507544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:97:3c:cd:99:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "190b1e4dbc4b84704bafdf33f14b0c728242ffe12133f3a7d8f637228926fb2b",
	                    "EndpointID": "9fefd75494de70091fe4fdf69631663c727e7236d9987ed273d484d28bb5b3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-507544",
	                        "add43b7ec9e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-507544 -n functional-507544
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs -n 25: (1.265705758s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-507544 ssh sudo cat /etc/test/nested/copy/7228/hosts                                                                                                 │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ dashboard      │ --url --port 36195 -p functional-507544 --alsologtostderr -v=1                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ image          │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image save kicbase/echo-server:functional-507544 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image rm kicbase/echo-server:functional-507544 --alsologtostderr                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image save --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format short --alsologtostderr                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format json --alsologtostderr                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format table --alsologtostderr                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format yaml --alsologtostderr                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ ssh            │ functional-507544 ssh pgrep buildkitd                                                                                                                           │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │                     │
	│ image          │ functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ service        │ functional-507544 service list                                                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │ 19 Oct 25 16:39 UTC │
	│ service        │ functional-507544 service list -o json                                                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │ 19 Oct 25 16:39 UTC │
	│ service        │ functional-507544 service --namespace=default --https --url hello-node                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	│ service        │ functional-507544 service hello-node --url --format={{.IP}}                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	│ service        │ functional-507544 service hello-node --url                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:30.118385   44149 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:30.118477   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118484   44149 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:30.118488   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118804   44149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:30.119280   44149 out.go:368] Setting JSON to false
	I1019 16:29:30.120292   44149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:30.120370   44149 start.go:143] virtualization: kvm guest
	I1019 16:29:30.122096   44149 out.go:179] * [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:30.123412   44149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:30.123412   44149 notify.go:221] Checking for updates...
	I1019 16:29:30.124663   44149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:30.125918   44149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:30.127440   44149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:30.128697   44149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:30.130309   44149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:30.131905   44149 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:30.132380   44149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:30.156613   44149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:30.156707   44149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:30.214136   44149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:30.20345749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:30.214295   44149 docker.go:319] overlay module found
	I1019 16:29:30.216279   44149 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:30.217643   44149 start.go:309] selected driver: docker
	I1019 16:29:30.217661   44149 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:30.217741   44149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:30.219580   44149 out.go:203] 
	W1019 16:29:30.220824   44149 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:29:30.221973   44149 out.go:203] 
	
	
	==> CRI-O <==
	Oct 19 16:43:37 functional-507544 crio[3576]: time="2025-10-19T16:43:37.148374776Z" level=info msg="Image docker.io/mysql:5.7 not found" id=29de80e3-c6a3-43c3-9ca5-163ba55afa55 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:43:37 functional-507544 crio[3576]: time="2025-10-19T16:43:37.148417069Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=29de80e3-c6a3-43c3-9ca5-163ba55afa55 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:43:50 functional-507544 crio[3576]: time="2025-10-19T16:43:50.148465827Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e7ece416-9527-485c-993e-2e7a8ec36b3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:43:50 functional-507544 crio[3576]: time="2025-10-19T16:43:50.148649188Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e7ece416-9527-485c-993e-2e7a8ec36b3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:43:50 functional-507544 crio[3576]: time="2025-10-19T16:43:50.148705789Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=e7ece416-9527-485c-993e-2e7a8ec36b3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:43:54 functional-507544 crio[3576]: time="2025-10-19T16:43:54.671972645Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 19 16:43:57 functional-507544 crio[3576]: time="2025-10-19T16:43:57.081687333Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8f103b87-333e-4854-9add-63ff649e5d2b name=/runtime.v1.ImageService/PullImage
	Oct 19 16:43:57 functional-507544 crio[3576]: time="2025-10-19T16:43:57.08247027Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=be3a8242-299a-4a80-b837-b2616d418650 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:43:57 functional-507544 crio[3576]: time="2025-10-19T16:43:57.084014819Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:44:03 functional-507544 crio[3576]: time="2025-10-19T16:44:03.148350028Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=961d1bfa-57b8-485f-8c21-20f62469439f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:03 functional-507544 crio[3576]: time="2025-10-19T16:44:03.148476482Z" level=info msg="Image docker.io/mysql:5.7 not found" id=961d1bfa-57b8-485f-8c21-20f62469439f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:03 functional-507544 crio[3576]: time="2025-10-19T16:44:03.148508696Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=961d1bfa-57b8-485f-8c21-20f62469439f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:09 functional-507544 crio[3576]: time="2025-10-19T16:44:09.148445294Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=67ec00a3-df34-45ac-aa99-89af6b15605e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:09 functional-507544 crio[3576]: time="2025-10-19T16:44:09.148634346Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=67ec00a3-df34-45ac-aa99-89af6b15605e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:09 functional-507544 crio[3576]: time="2025-10-19T16:44:09.148702768Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=67ec00a3-df34-45ac-aa99-89af6b15605e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:14 functional-507544 crio[3576]: time="2025-10-19T16:44:14.148158945Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=26b09827-99b4-4c9c-8b28-ea65cb5767f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:14 functional-507544 crio[3576]: time="2025-10-19T16:44:14.148327527Z" level=info msg="Image docker.io/mysql:5.7 not found" id=26b09827-99b4-4c9c-8b28-ea65cb5767f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:14 functional-507544 crio[3576]: time="2025-10-19T16:44:14.14836906Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=26b09827-99b4-4c9c-8b28-ea65cb5767f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:21 functional-507544 crio[3576]: time="2025-10-19T16:44:21.148728842Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=995be58c-eac0-44a9-a5c8-36a68be85550 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:21 functional-507544 crio[3576]: time="2025-10-19T16:44:21.148904283Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=995be58c-eac0-44a9-a5c8-36a68be85550 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:21 functional-507544 crio[3576]: time="2025-10-19T16:44:21.148948033Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=995be58c-eac0-44a9-a5c8-36a68be85550 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:27 functional-507544 crio[3576]: time="2025-10-19T16:44:27.728959123Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:44:34 functional-507544 crio[3576]: time="2025-10-19T16:44:34.148770831Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=42135f17-d216-4f4d-85a5-241913132604 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:34 functional-507544 crio[3576]: time="2025-10-19T16:44:34.148982383Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=42135f17-d216-4f4d-85a5-241913132604 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:44:34 functional-507544 crio[3576]: time="2025-10-19T16:44:34.149025275Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=42135f17-d216-4f4d-85a5-241913132604 name=/runtime.v1.ImageService/ImageStatus
	
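The CRI-O loop above is the kubelet re-polling ImageStatus for docker.io/mysql:5.7 while no pull ever completes, which matches the TestFunctional/parallel/MySQL timeout in the summary. A diagnostic sketch using generic crictl/minikube commands (not part of the recorded run):

	# Check whether the image ever reached the node's CRI-O image store.
	minikube -p functional-507544 ssh -- sudo crictl images | grep mysql
	# Side-load the image from the host, bypassing the registry pull.
	minikube -p functional-507544 image load docker.io/mysql:5.7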
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	874c2ff14b97f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   15 minutes ago      Exited              mount-munger              0                   7580ba9d2c75e       busybox-mount                               default
	f909fd2f1f12b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       15 minutes ago      Running             nginx                     0                   0136c3b1d2067       nginx-svc                                   default
	1d5ff7dca36a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 minutes ago      Running             storage-provisioner       2                   cb2b6d0554192       storage-provisioner                         kube-system
	bd130c1088bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      15 minutes ago      Running             kube-apiserver            0                   f1c19208fe76e       kube-apiserver-functional-507544            kube-system
	2996708114c16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      15 minutes ago      Running             kube-scheduler            1                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	6f9d68db6d3d6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      15 minutes ago      Running             kube-controller-manager   1                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	232b3554efed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      15 minutes ago      Running             etcd                      1                   2c3aea9080115       etcd-functional-507544                      kube-system
	b1a6aad7bccb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       1                   cb2b6d0554192       storage-provisioner                         kube-system
	a6ac3c40d3695       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Running             coredns                   1                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	d105c28b8985b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      16 minutes ago      Running             kube-proxy                1                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	10025537dc3ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      16 minutes ago      Running             kindnet-cni               1                   e9657439b413e       kindnet-mvc2p                               kube-system
	3cdd80e1fd14a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Exited              coredns                   0                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	c4f6db9ff93af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      17 minutes ago      Exited              kube-proxy                0                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	8fcdf9dec3441       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Exited              kindnet-cni               0                   e9657439b413e       kindnet-mvc2p                               kube-system
	c992a57ea0446       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Exited              etcd                      0                   2c3aea9080115       etcd-functional-507544                      kube-system
	61c5eb9cc5552       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      17 minutes ago      Exited              kube-scheduler            0                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	5886e461a7b4a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      17 minutes ago      Exited              kube-controller-manager   0                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	
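The table above is CRI-level container state; it can be re-queried on the node directly (same profile assumption as above):

	# List all containers, including exited ones, as in the table above.
	minikube -p functional-507544 ssh -- sudo crictl ps -a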
	
	==> coredns [3cdd80e1fd14ae4035e4531b71f7296d418ddaf9b3f2faf0be415a36ca1d2613] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53096 - 64513 "HINFO IN 5388708912172919926.3799386503118995872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.101589867s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6ac3c40d36958a6de862293b761e2d75aeab587854de7d49bfae769f19fb001] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58675 - 8925 "HINFO IN 3256408165590317714.5502113091922773666. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058726274s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
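Both coredns generations log connection-refused and TLS-handshake-timeout errors only around the apiserver restart window (~16:28), then settle into ready-probe waits, so this reads as the usual control-plane restart race rather than a DNS fault. A sketch for confirming coredns recovered (standard kubectl selectors; the context name follows the minikube profile):

	kubectl --context functional-507544 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context functional-507544 -n kube-system logs -l k8s-app=kube-dns --tail=20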
	
	==> describe nodes <==
	Name:               functional-507544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-507544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-507544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_27_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:26:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-507544
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:44:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:43:33 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:43:33 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:43:33 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:43:33 +0000   Sun, 19 Oct 2025 16:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-507544
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8e1900d7-85d2-490a-a8f6-e11dcc838551
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-mv5h7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-cx7lv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-vgwqp                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-z4xwl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-507544                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-mvc2p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-507544              250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-507544     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-rwnpm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-507544              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v4d72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqsqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	  Normal  NodeReady                16m                kubelet          Node functional-507544 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	
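The node description above can be re-queried live; a one-line sketch (context name assumed to follow the profile):

	kubectl --context functional-507544 describe node functional-507544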
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [232b3554efed3e895da66259cb238b385489998d30cbf9437a17db6348583118] <==
	{"level":"warn","ts":"2025-10-19T16:28:47.180801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.187323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.193536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.201096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.209599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.216364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.222822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.230208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.237492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.250556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.257478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.264056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.270509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.277016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.283231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.295941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.303147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.309595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.356254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:38:46.878301Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1030}
	{"level":"info","ts":"2025-10-19T16:38:46.898623Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1030,"took":"19.934294ms","hash":3149245820,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1458176,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-19T16:38:46.898684Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3149245820,"revision":1030,"compact-revision":-1}
	{"level":"info","ts":"2025-10-19T16:43:46.884408Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1345}
	{"level":"info","ts":"2025-10-19T16:43:46.887592Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1345,"took":"2.837133ms","hash":862486959,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2207744,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-19T16:43:46.887632Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":862486959,"revision":1345,"compact-revision":1030}
	
	
	==> etcd [c992a57ea0446745a96ba364917b05db213b70a8a91f3769afc9ebce3fdf3850] <==
	{"level":"warn","ts":"2025-10-19T16:26:58.808666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.814744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.831321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.834820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.840818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.846932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.890385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:28:24.086788Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:28:24.086876Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:28:24.086990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088545Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088577Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088642Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088649Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088644Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:28:24.088665Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088664Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088696Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090695Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:28:24.090752Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090785Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:28:24.090794Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
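The first etcd instance shuts down cleanly at 16:28:24; the "rejected connection" warnings on the replacement are bare client connections that close without completing a TLS handshake (likely health probes), not data-plane errors. A health-check sketch, assuming etcdctl is present on the node and minikube's default certificate layout under /var/lib/minikube/certs:

	minikube -p functional-507544 ssh -- sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health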
	
	==> kernel <==
	 16:44:37 up 27 min,  0 user,  load average: 0.12, 0.15, 0.28
	Linux functional-507544 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10025537dc3abc6d9b1d66732463d3b2b95aa6fe95fe6ce8440ffb8252db820f] <==
	I1019 16:42:34.388355       1 main.go:301] handling current node
	I1019 16:42:44.387655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:42:44.387694       1 main.go:301] handling current node
	I1019 16:42:54.387899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:42:54.387980       1 main.go:301] handling current node
	I1019 16:43:04.387671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:04.387711       1 main.go:301] handling current node
	I1019 16:43:14.390442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:14.390487       1 main.go:301] handling current node
	I1019 16:43:24.387790       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:24.387831       1 main.go:301] handling current node
	I1019 16:43:34.387695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:34.387754       1 main.go:301] handling current node
	I1019 16:43:44.390552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:44.390591       1 main.go:301] handling current node
	I1019 16:43:54.388615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:43:54.388681       1 main.go:301] handling current node
	I1019 16:44:04.388746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:04.388784       1 main.go:301] handling current node
	I1019 16:44:14.390187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:14.390236       1 main.go:301] handling current node
	I1019 16:44:24.388259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:24.388302       1 main.go:301] handling current node
	I1019 16:44:34.397386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:34.397422       1 main.go:301] handling current node
	
	
	==> kindnet [8fcdf9dec34411c1d2d1bdbf7f4262661b43f3d0d72fdbb514c24d6552cb6e4f] <==
	I1019 16:27:07.894506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:27:07.894803       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:27:07.894944       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:27:07.894962       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:27:07.894984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:27:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:27:08.096111       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:27:08.096411       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:27:08.096445       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:27:08.096616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 16:27:38.097600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 16:27:38.097611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 16:27:39.696719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:27:39.696755       1 metrics.go:72] Registering metrics
	I1019 16:27:39.696818       1 controller.go:711] "Syncing nftables rules"
	I1019 16:27:48.104019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:48.104059       1 main.go:301] handling current node
	I1019 16:27:58.100727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:58.100769       1 main.go:301] handling current node
	I1019 16:28:08.100413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:08.100450       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd130c1088bf7d1c124730d74a847b71c9821cf326a67aad1216cb07a547b96a] <==
	I1019 16:28:47.811965       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:28:47.816819       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 16:28:47.816846       1 policy_source.go:240] refreshing policies
	I1019 16:28:47.827112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:28:48.710386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1019 16:28:49.017195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 16:28:49.018467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:28:49.023654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:28:49.241858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:28:49.523461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:28:49.620178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:28:49.673977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:28:49.680303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:28:51.304578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:29:14.725009       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.56.157"}
	I1019 16:29:18.524389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.202.206"}
	I1019 16:29:21.388791       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.23.83"}
	I1019 16:29:34.345873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:29:34.466507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.181"}
	I1019 16:29:34.481960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.204.129"}
	E1019 16:29:34.615398       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55370: use of closed network connection
	I1019 16:29:36.447894       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.19.161"}
	I1019 16:34:35.824119       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.88.91"}
	I1019 16:38:47.726123       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5886e461a7b4a183fac570c337eba31221c4f8d80680f651e35b86312b3b4662] <==
	I1019 16:27:06.269895       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:27:06.270018       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:27:06.270100       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 16:27:06.270136       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:27:06.270243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:27:06.270449       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:27:06.270505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:27:06.270520       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:27:06.270493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:27:06.270862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:27:06.270975       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:27:06.271112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:27:06.272386       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:27:06.273558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:27:06.275820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:06.278936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 16:27:06.278986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 16:27:06.279015       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 16:27:06.279024       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 16:27:06.279028       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 16:27:06.279131       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:27:06.284734       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-507544" podCIDRs=["10.244.0.0/24"]
	I1019 16:27:06.285694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:06.285701       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:27:51.226515       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [6f9d68db6d3d66d4b89c73b933115640996e2ad1584dc4733fec9eb8f8617cee] <==
	I1019 16:28:51.147320       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:28:51.147536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:28:51.147633       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:28:51.148007       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:28:51.148025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 16:28:51.149985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:28:51.151271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.152361       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:28:51.153569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 16:28:51.153598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 16:28:51.153610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:28:51.153623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:28:51.157958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:28:51.160113       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:28:51.161272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 16:28:51.163450       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:28:51.168958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:29:34.405011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.408954       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.412916       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.413153       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.416170       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.421693       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
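The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are a creation-order race: the ReplicaSet syncs fire before the namespace's ServiceAccount exists, then succeed on retry (both dashboard pods do appear in the node description above). A verification sketch:

	# Confirm the ServiceAccount and workloads exist after the retries.
	kubectl --context functional-507544 -n kubernetes-dashboard get serviceaccounts,deployments,pods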
	
	==> kube-proxy [c4f6db9ff93af54aa47d8330711b3aacaf2d4ed53868d324569d79661c893e86] <==
	I1019 16:27:07.757997       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:27:07.829126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:27:07.929844       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:27:07.929903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:27:07.930034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:27:07.948180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:27:07.948245       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:27:07.953226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:27:07.953625       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:27:07.953644       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:27:07.954969       1 config.go:309] "Starting node config controller"
	I1019 16:27:07.954988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:27:07.954996       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:27:07.955009       1 config.go:200] "Starting service config controller"
	I1019 16:27:07.955013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:27:07.955024       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:27:07.955040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:27:07.955031       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:27:07.955093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:27:08.055919       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:27:08.055962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:27:08.055920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d105c28b8985b9425bc6dc11577787f84445b24a2501dbbf2c5479621ec7d4c5] <==
	E1019 16:28:14.055235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:14.925306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:17.992857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:21.964850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:42.435400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:05.054962       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:05.054999       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:05.055094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:05.075205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:05.075262       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:05.080962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:05.081386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:05.081402       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:05.082685       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:05.082707       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:05.082716       1 config.go:200] "Starting service config controller"
	I1019 16:29:05.082742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:05.082836       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:05.082898       1 config.go:309] "Starting node config controller"
	I1019 16:29:05.082907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:05.082920       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:05.083625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:05.182940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:05.183531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:05.184724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
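	
	Note on the nodePortAddresses warning above: it is advisory, not an error. With the field unset, NodePort services accept connections on every local IP. A minimal sketch of the configuration change the log suggests, assuming a kube-proxy recent enough (roughly v1.29+) to accept the special value "primary" in KubeProxyConfiguration:
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    # Accept NodePort connections only on the node's primary IP(s),
	    # mirroring the suggested flag --nodeport-addresses primary.
	    nodePortAddresses:
	      - primary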
	
	
	==> kube-scheduler [2996708114c16d776a6aff6f9fcc6dad7f1fef587753492e0a7a7981480bcf7c] <==
	I1019 16:28:46.382085       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:28:47.731959       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:28:47.732000       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:28:47.732014       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:28:47.732024       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:28:47.750471       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:28:47.750506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:47.752862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.752902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.753175       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:28:47.753232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:28:47.853949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
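	
	The requestheader warning above carries its own remedy template. Filled in with purely illustrative names (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are placeholders from the log, not values this cluster uses), the command would take the form:
	
	    kubectl create rolebinding scheduler-authentication-reader \
	      -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=kube-system:my-scheduler-sa
	
	The scheduler continued without the authentication configuration, so for this test run the warning is benign.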
	
	
	==> kube-scheduler [61c5eb9cc5552cf24b68447b4ed0fbb9972f27fc884505108656b85504cc2ff2] <==
	E1019 16:26:59.303300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:26:59.303338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:26:59.303374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:26:59.303445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:26:59.303447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:26:59.303520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:26:59.303537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:00.157669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:27:00.187109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:27:00.188168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:27:00.235871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:27:00.343078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:27:00.351277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:27:00.373492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:27:00.375580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:27:00.470255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:27:00.471050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:00.517324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:27:00.899509       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197145       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:28:24.197196       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:28:24.197223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:28:24.197236       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:28:24.197268       1 run.go:72] "command failed" err="finished without leader elect"
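	
	The burst of "Failed to watch ... is forbidden" errors at 16:26:59-16:27:00 is the usual signature of a scheduler starting before its RBAC bindings have propagated; the informer caches sync successfully at 16:27:00.899, so the errors were transient. A quick way to spot-check the resolved permissions (a sketch; assumes a working kubeconfig for this cluster):
	
	    kubectl auth can-i list nodes --as=system:kube-scheduler
	    kubectl auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler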
	
	
	==> kubelet <==
	Oct 19 16:43:24 functional-507544 kubelet[4118]: E1019 16:43:24.026617    4118 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:43:24 functional-507544 kubelet[4118]: E1019 16:43:24.026824    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-mv5h7_default(e68c1276-cc7a-4036-91b9-ba15632cc2bf): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 19 16:43:24 functional-507544 kubelet[4118]: E1019 16:43:24.027099    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:43:37 functional-507544 kubelet[4118]: E1019 16:43:37.148129    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:43:37 functional-507544 kubelet[4118]: E1019 16:43:37.148766    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:43:50 functional-507544 kubelet[4118]: E1019 16:43:50.149033    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:43:52 functional-507544 kubelet[4118]: E1019 16:43:52.147603    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.081206    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.081272    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.081559    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-v4d72_kubernetes-dashboard(f72c10dc-e8cd-4faf-99bc-7c5d642cadce): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.081627    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.082080    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.082131    4118 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.082347    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-cx7lv_default(661b3d8c-f0db-4f13-85dc-d37412da52f9): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 19 16:43:57 functional-507544 kubelet[4118]: E1019 16:43:57.083631    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
	Oct 19 16:44:03 functional-507544 kubelet[4118]: E1019 16:44:03.148803    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:44:04 functional-507544 kubelet[4118]: E1019 16:44:04.147860    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:44:09 functional-507544 kubelet[4118]: E1019 16:44:09.148129    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
	Oct 19 16:44:09 functional-507544 kubelet[4118]: E1019 16:44:09.149089    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:44:18 functional-507544 kubelet[4118]: E1019 16:44:18.148670    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:44:21 functional-507544 kubelet[4118]: E1019 16:44:21.149371    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:44:23 functional-507544 kubelet[4118]: E1019 16:44:23.148357    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
	Oct 19 16:44:33 functional-507544 kubelet[4118]: E1019 16:44:33.148130    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:44:34 functional-507544 kubelet[4118]: E1019 16:44:34.149395    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:44:36 functional-507544 kubelet[4118]: E1019 16:44:36.148332    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
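	
	Two distinct pull failures recur in the kubelet log: Docker Hub's unauthenticated rate limit (toomanyrequests) and CRI-O's short-name enforcement rejecting the unqualified reference kicbase/echo-server:latest. The short-name behavior is governed by the host's containers-registries configuration; a sketch of the relevant knobs in /etc/containers/registries.conf (the values shown are illustrative, not necessarily what this node uses):
	
	    # /etc/containers/registries.conf
	    short-name-mode = "enforcing"                 # reject ambiguous short names
	    unqualified-search-registries = ["docker.io"] # candidates for short-name resolution
	
	With enforcement on, the unambiguous fix is to reference the image fully qualified, e.g. docker.io/kicbase/echo-server:latest rather than kicbase/echo-server:latest.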
	
	
	==> storage-provisioner [1d5ff7dca36a908853bf80f1bdc7a8189fcd31580bdeff4098c2276ae5f95801] <==
	W1019 16:44:12.532943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:14.536368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:14.540896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:16.544574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:16.548858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:18.551725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:18.556135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:20.559691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:20.563717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:22.567238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:22.572464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:24.576143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:24.580113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:26.583001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:26.588112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:28.591927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:28.596683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:30.599726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:30.603616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:32.607214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:32.611580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:34.614637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:34.619388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:36.622864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:44:36.627696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
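	
	The steady two-second cadence of these warnings suggests they come from the provisioner's leader-election renewals, which still use the deprecated v1 Endpoints API. They are harmless for the test, but the replacement API the warning points to can be inspected directly (assuming a working kubeconfig):
	
	    kubectl get endpointslices.discovery.k8s.io -n kube-system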
	
	
	==> storage-provisioner [b1a6aad7bccb7a4da8fd921d34e59bdef668c795f652e7e54ace1bc2adf761a6] <==
	I1019 16:28:13.955241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 16:28:13.956728       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
helpers_test.go:269: (dbg) Run:  kubectl --context functional-507544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1 (100.72208ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://874c2ff14b97f5564fd2f8b7ea851875753a6ff2aca767743f234378aa18a8cf
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:29:24 +0000
	      Finished:     Sun, 19 Oct 2025 16:29:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxtmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pxtmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15m   default-scheduler  Successfully assigned default/busybox-mount to functional-507544
	  Normal  Pulling    15m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 846ms (2.459s including waiting). Image size: 4631262 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-mv5h7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n469f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n469f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mv5h7 to functional-507544
	  Normal   Pulling    3m21s (x5 over 15m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     74s (x5 over 15m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     74s (x5 over 15m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x16 over 15m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5s (x16 over 15m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-cx7lv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:34:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqmbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qqmbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cx7lv to functional-507544
	  Normal   Pulling    2m51s (x3 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     41s (x3 over 6m23s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     41s (x3 over 6m23s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x5 over 6m23s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x5 over 6m23s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-vgwqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s558z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s558z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  15m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-vgwqp to functional-507544
	  Warning  Failed     4m51s (x2 over 9m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     74s (x3 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     74s                    kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    35s (x5 over 9m57s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     35s (x5 over 9m57s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x4 over 15m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8m6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-q8m6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/sp-pod to functional-507544
	  Warning  Failed     2m16s (x3 over 11m)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m16s (x3 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    98s (x5 over 11m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     98s (x5 over 11m)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    87s (x4 over 15m)    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v4d72" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wqsqt" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.93s)
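
The describe output above shows both hello-node pods blocked on the same short-name resolution error and mysql blocked on Docker Hub's unauthenticated pull rate limit. For the rate limit, one common mitigation (a sketch; the secret name and the <user>/<access-token> values are placeholders) is to authenticate pulls through an image pull secret:

    kubectl create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> \
      --docker-password=<access-token>

The secret is then referenced from the pod spec under imagePullSecrets, or attached to the namespace's default service account.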

TestFunctional/parallel/PersistentVolumeClaim (377.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1a99182e-17bd-48d5-8fd8-4f37599e20e9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004406417s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-507544 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-507544 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-507544 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-507544 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c41361db-2c04-432d-a44f-a792ea1f0ae0] Pending
helpers_test.go:352: "sp-pod" [c41361db-2c04-432d-a44f-a792ea1f0ae0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c41361db-2c04-432d-a44f-a792ea1f0ae0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004059731s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-507544 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-507544 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-507544 apply -f testdata/storage-provisioner/pod.yaml
I1019 16:29:35.520650    7228 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f1eb7495-9064-4ffe-979e-857122179f13] Pending
helpers_test.go:352: "sp-pod" [f1eb7495-9064-4ffe-979e-857122179f13] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-19 16:35:35.843364426 +0000 UTC m=+920.898630726
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-507544 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-507544 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-507544/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:29:35 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8m6v (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-q8m6v:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                 From               Message
----     ------     ----                ----               -------
Normal   Scheduled  6m                  default-scheduler  Successfully assigned default/sp-pod to functional-507544
Warning  Failed     2m26s               kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m26s               kubelet            Error: ErrImagePull
Normal   BackOff    2m26s               kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     2m26s               kubelet            Error: ImagePullBackOff
Normal   Pulling    2m13s (x2 over 6m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-507544 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-507544 logs sp-pod -n default: exit status 1 (73.991624ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-507544 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
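
Note that the PVC flow itself succeeded on the first pass (the claim bound, the first sp-pod ran, and /tmp/mount/foo was written); only the recreated pod's docker.io/nginx pull hit the rate limit. For reference, a minimal claim/pod pair of the shape this test exercises might look like the following sketch (the names myclaim, sp-pod, myfrontend, mypd, the pod label, and the mount path are taken from the log; the storage size is hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
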
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-507544
helpers_test.go:243: (dbg) docker inspect functional-507544:

-- stdout --
	[
	    {
	        "Id": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	        "Created": "2025-10-19T16:26:45.122198852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:26:45.156822472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hostname",
	        "HostsPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hosts",
	        "LogPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112-json.log",
	        "Name": "/functional-507544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-507544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-507544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	                "LowerDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-507544",
	                "Source": "/var/lib/docker/volumes/functional-507544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-507544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-507544",
	                "name.minikube.sigs.k8s.io": "functional-507544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08d99069f25412728f0de1767e870ab3d5e37b37c9be1e53bb041c372f124f33",
	            "SandboxKey": "/var/run/docker/netns/08d99069f254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-507544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:97:3c:cd:99:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "190b1e4dbc4b84704bafdf33f14b0c728242ffe12133f3a7d8f637228926fb2b",
	                    "EndpointID": "9fefd75494de70091fe4fdf69631663c727e7236d9987ed273d484d28bb5b3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-507544",
	                        "add43b7ec9e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
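The inspect dump above is the post-mortem snapshot of the node container; when triaging locally, the same state can be re-queried field by field. A minimal sketch (assuming the node container is named after the minikube profile, as it is here):

	docker inspect functional-507544 --format '{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}'
	docker inspect functional-507544 --format '{{json .NetworkSettings.Ports}}'   # host ports mapped for 22/2376/5000/8441/32443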
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-507544 -n functional-507544
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs -n 25: (1.303153506s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-507544 ssh sudo systemctl is-active containerd                                                                                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ ssh       │ functional-507544 ssh findmnt -T /mount2                                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh findmnt -T /mount3                                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ mount     │ -p functional-507544 --kill=true                                                                                                                                │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ addons    │ functional-507544 addons list                                                                                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ addons    │ functional-507544 addons list -o json                                                                                                                           │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/7228.pem                                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /usr/share/ca-certificates/7228.pem                                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/72282.pem                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /usr/share/ca-certificates/72282.pem                                                                                             │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh       │ functional-507544 ssh sudo cat /etc/test/nested/copy/7228/hosts                                                                                                 │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ dashboard │ --url --port 36195 -p functional-507544 --alsologtostderr -v=1                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ image     │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image save kicbase/echo-server:functional-507544 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image rm kicbase/echo-server:functional-507544 --alsologtostderr                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image     │ functional-507544 image save --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
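	# The tail of the audit log above is the image save -> rm -> load round-trip the ImageCommands
	# tests exercise; a hedged reproduction (the tar path is illustrative, not the workspace path):
	#   out/minikube-linux-amd64 -p functional-507544 image save kicbase/echo-server:functional-507544 /tmp/echo-server-save.tar
	#   out/minikube-linux-amd64 -p functional-507544 image rm kicbase/echo-server:functional-507544
	#   out/minikube-linux-amd64 -p functional-507544 image load /tmp/echo-server-save.tar
	#   out/minikube-linux-amd64 -p functional-507544 image ls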
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:30.118385   44149 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:30.118477   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118484   44149 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:30.118488   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118804   44149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:30.119280   44149 out.go:368] Setting JSON to false
	I1019 16:29:30.120292   44149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:30.120370   44149 start.go:143] virtualization: kvm guest
	I1019 16:29:30.122096   44149 out.go:179] * [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:30.123412   44149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:30.123412   44149 notify.go:221] Checking for updates...
	I1019 16:29:30.124663   44149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:30.125918   44149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:30.127440   44149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:30.128697   44149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:30.130309   44149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:30.131905   44149 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:30.132380   44149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:30.156613   44149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:30.156707   44149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:30.214136   44149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:30.20345749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:30.214295   44149 docker.go:319] overlay module found
	I1019 16:29:30.216279   44149 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:30.217643   44149 start.go:309] selected driver: docker
	I1019 16:29:30.217661   44149 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:30.217741   44149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:30.219580   44149 out.go:203] 
	W1019 16:29:30.220824   44149 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:29:30.221973   44149 out.go:203] 
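	# The warning above is minikube's memory validator: the requested 250MiB is below the 1800MB
	# usable minimum, so start exits before touching the cluster. A passing invocation would raise
	# the flag (value illustrative; anything at or above the minimum clears the check):
	#   out/minikube-linux-amd64 start -p functional-507544 --memory=2048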
	
	
	==> CRI-O <==
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148227283Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148419886Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:22 functional-507544 crio[3576]: time="2025-10-19T16:32:22.148465196Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=e9015535-24b0-4569-b4a8-1e429e265541 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:32:38 functional-507544 crio[3576]: time="2025-10-19T16:32:38.503578793Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 19 16:33:09 functional-507544 crio[3576]: time="2025-10-19T16:33:09.152168719Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=50ddebf5-dd5e-43bb-a11a-2e6f74361ffc name=/runtime.v1.ImageService/PullImage
	Oct 19 16:33:09 functional-507544 crio[3576]: time="2025-10-19T16:33:09.15648869Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 19 16:34:10 functional-507544 crio[3576]: time="2025-10-19T16:34:10.24769623Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.06979907Z" level=info msg="Running pod sandbox: default/hello-node-connect-7d85dfc575-cx7lv/POD" id=806c9f55-7cc0-415b-9904-8dee989353e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.06990532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.07497072Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-cx7lv Namespace:default ID:67b1df18aa2868fe8391835ef3db6baf6399fca7d4a3cc2ec9c32d709355d6c6 UID:661b3d8c-f0db-4f13-85dc-d37412da52f9 NetNS:/var/run/netns/3f9b530d-2324-4c8c-a1ec-62ec51d33d97 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac78}] Aliases:map[]}"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.075003496Z" level=info msg="Adding pod default_hello-node-connect-7d85dfc575-cx7lv to CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.085471019Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-cx7lv Namespace:default ID:67b1df18aa2868fe8391835ef3db6baf6399fca7d4a3cc2ec9c32d709355d6c6 UID:661b3d8c-f0db-4f13-85dc-d37412da52f9 NetNS:/var/run/netns/3f9b530d-2324-4c8c-a1ec-62ec51d33d97 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac78}] Aliases:map[]}"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.085607112Z" level=info msg="Checking pod default_hello-node-connect-7d85dfc575-cx7lv for CNI network kindnet (type=ptp)"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.086380094Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 16:34:36 functional-507544 crio[3576]: time="2025-10-19T16:34:36.087264888Z" level=info msg="Ran pod sandbox 67b1df18aa2868fe8391835ef3db6baf6399fca7d4a3cc2ec9c32d709355d6c6 with infra container: default/hello-node-connect-7d85dfc575-cx7lv/POD" id=806c9f55-7cc0-415b-9904-8dee989353e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:34:40 functional-507544 crio[3576]: time="2025-10-19T16:34:40.899214655Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=22f255be-68d8-46e0-910e-16f8e840cc90 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:34:40 functional-507544 crio[3576]: time="2025-10-19T16:34:40.899953312Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=7d6fbd27-9903-42f2-b58c-9f00c03de064 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:34:40 functional-507544 crio[3576]: time="2025-10-19T16:34:40.904607106Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 19 16:34:41 functional-507544 crio[3576]: time="2025-10-19T16:34:41.174351781Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=83aaf56a-c369-4b82-991e-e64dd1629614 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:41 functional-507544 crio[3576]: time="2025-10-19T16:34:41.17450553Z" level=info msg="Image docker.io/mysql:5.7 not found" id=83aaf56a-c369-4b82-991e-e64dd1629614 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:41 functional-507544 crio[3576]: time="2025-10-19T16:34:41.17458908Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=83aaf56a-c369-4b82-991e-e64dd1629614 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:53 functional-507544 crio[3576]: time="2025-10-19T16:34:53.148361774Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=56978f58-a4d1-4ea1-b67c-4449acafa71e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:53 functional-507544 crio[3576]: time="2025-10-19T16:34:53.148508163Z" level=info msg="Image docker.io/mysql:5.7 not found" id=56978f58-a4d1-4ea1-b67c-4449acafa71e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:53 functional-507544 crio[3576]: time="2025-10-19T16:34:53.148545882Z" level=info msg="Neither image nor artifact docker.io/mysql:5.7 found" id=56978f58-a4d1-4ea1-b67c-4449acafa71e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:35:11 functional-507544 crio[3576]: time="2025-10-19T16:35:11.545830342Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
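	# The repeated "Image docker.io/mysql:5.7 not found" entries above are kubelet ImageStatus polls
	# racing a slow registry pull ("Trying to access docker.io/library/mysql:5.7" never completes in
	# this window). A hedged way to watch the pull from inside the node (crictl ships in the kicbase
	# image):
	#   out/minikube-linux-amd64 -p functional-507544 ssh -- sudo crictl images
	#   out/minikube-linux-amd64 -p functional-507544 ssh -- sudo crictl pull docker.io/library/mysql:5.7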
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	874c2ff14b97f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 minutes ago       Exited              mount-munger              0                   7580ba9d2c75e       busybox-mount                               default
	f909fd2f1f12b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       6 minutes ago       Running             nginx                     0                   0136c3b1d2067       nginx-svc                                   default
	1d5ff7dca36a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       2                   cb2b6d0554192       storage-provisioner                         kube-system
	bd130c1088bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   f1c19208fe76e       kube-apiserver-functional-507544            kube-system
	2996708114c16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            1                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	6f9d68db6d3d6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   1                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	232b3554efed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   2c3aea9080115       etcd-functional-507544                      kube-system
	b1a6aad7bccb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       1                   cb2b6d0554192       storage-provisioner                         kube-system
	a6ac3c40d3695       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   1                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	d105c28b8985b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Running             kube-proxy                1                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	10025537dc3ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Running             kindnet-cni               1                   e9657439b413e       kindnet-mvc2p                               kube-system
	3cdd80e1fd14a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	c4f6db9ff93af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      8 minutes ago       Exited              kube-proxy                0                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	8fcdf9dec3441       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      8 minutes ago       Exited              kindnet-cni               0                   e9657439b413e       kindnet-mvc2p                               kube-system
	c992a57ea0446       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      8 minutes ago       Exited              etcd                      0                   2c3aea9080115       etcd-functional-507544                      kube-system
	61c5eb9cc5552       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      8 minutes ago       Exited              kube-scheduler            0                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	5886e461a7b4a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      8 minutes ago       Exited              kube-controller-manager   0                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	
	
	==> coredns [3cdd80e1fd14ae4035e4531b71f7296d418ddaf9b3f2faf0be415a36ca1d2613] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53096 - 64513 "HINFO IN 5388708912172919926.3799386503118995872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.101589867s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6ac3c40d36958a6de862293b761e2d75aeab587854de7d49bfae769f19fb001] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58675 - 8925 "HINFO IN 3256408165590317714.5502113091922773666. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058726274s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-507544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-507544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-507544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_27_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:26:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-507544
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:35:15 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:35:15 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:35:15 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:35:15 +0000   Sun, 19 Oct 2025 16:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-507544
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8e1900d7-85d2-490a-a8f6-e11dcc838551
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-mv5h7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  default                     hello-node-connect-7d85dfc575-cx7lv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  default                     mysql-5bb876957f-vgwqp                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m1s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-z4xwl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m30s
	  kube-system                 etcd-functional-507544                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m36s
	  kube-system                 kindnet-mvc2p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m30s
	  kube-system                 kube-apiserver-functional-507544              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-functional-507544     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-rwnpm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-functional-507544              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v4d72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqsqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m29s                  kube-proxy       
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 8m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m41s (x8 over 8m41s)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m41s (x8 over 8m41s)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s (x8 over 8m41s)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m36s                  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m36s                  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m36s                  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m31s                  node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	  Normal  NodeReady                7m49s                  kubelet          Node functional-507544 status is now: NodeReady
	  Normal  Starting                 7m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m11s (x8 over 7m11s)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s (x8 over 7m11s)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s (x8 over 7m11s)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m46s                  node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
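	# The node itself stays Ready throughout; the failures above it are workload image pulls, not
	# node health. The same view can be re-queried directly (a sketch, assuming the profile's
	# kubectl context as minikube configures it):
	#   kubectl --context functional-507544 describe node functional-507544
	#   kubectl --context functional-507544 get pods -A -o wide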
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [232b3554efed3e895da66259cb238b385489998d30cbf9437a17db6348583118] <==
	{"level":"warn","ts":"2025-10-19T16:28:47.138344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.147387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.156109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.162655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.168752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.174569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.180801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.187323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.193536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.201096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.209599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.216364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.222822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.230208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.237492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.250556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.257478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.264056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.270509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.277016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.283231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.295941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.303147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.309595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.356254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	
	
	==> etcd [c992a57ea0446745a96ba364917b05db213b70a8a91f3769afc9ebce3fdf3850] <==
	{"level":"warn","ts":"2025-10-19T16:26:58.808666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.814744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.831321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.834820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.840818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.846932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.890385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:28:24.086788Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:28:24.086876Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:28:24.086990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088545Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088577Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088642Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088649Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088644Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:28:24.088665Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088664Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088696Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090695Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:28:24.090752Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090785Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:28:24.090794Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:35:37 up 18 min,  0 user,  load average: 0.09, 0.41, 0.44
	Linux functional-507544 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10025537dc3abc6d9b1d66732463d3b2b95aa6fe95fe6ce8440ffb8252db820f] <==
	I1019 16:33:34.392902       1 main.go:301] handling current node
	I1019 16:33:44.391111       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:44.391148       1 main.go:301] handling current node
	I1019 16:33:54.389731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:54.389767       1 main.go:301] handling current node
	I1019 16:34:04.387704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:04.387757       1 main.go:301] handling current node
	I1019 16:34:14.387729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:14.387771       1 main.go:301] handling current node
	I1019 16:34:24.393012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:24.393049       1 main.go:301] handling current node
	I1019 16:34:34.396520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:34.396562       1 main.go:301] handling current node
	I1019 16:34:44.390017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:44.390083       1 main.go:301] handling current node
	I1019 16:34:54.388045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:54.388126       1 main.go:301] handling current node
	I1019 16:35:04.390900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:35:04.390938       1 main.go:301] handling current node
	I1019 16:35:14.391044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:35:14.391110       1 main.go:301] handling current node
	I1019 16:35:24.395458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:35:24.395501       1 main.go:301] handling current node
	I1019 16:35:34.396592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:35:34.396630       1 main.go:301] handling current node
	
	
	==> kindnet [8fcdf9dec34411c1d2d1bdbf7f4262661b43f3d0d72fdbb514c24d6552cb6e4f] <==
	I1019 16:27:07.894506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:27:07.894803       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:27:07.894944       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:27:07.894962       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:27:07.894984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:27:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:27:08.096111       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:27:08.096411       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:27:08.096445       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:27:08.096616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 16:27:38.097600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 16:27:38.097611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 16:27:39.696719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:27:39.696755       1 metrics.go:72] Registering metrics
	I1019 16:27:39.696818       1 controller.go:711] "Syncing nftables rules"
	I1019 16:27:48.104019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:48.104059       1 main.go:301] handling current node
	I1019 16:27:58.100727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:58.100769       1 main.go:301] handling current node
	I1019 16:28:08.100413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:08.100450       1 main.go:301] handling current node
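	Note: the four "Failed to watch ... i/o timeout" errors at 16:27:38 coincide with the control-plane restart visible in the etcd and kube-proxy logs; the informers retry and report "Caches are synced" a second later, so no action is needed. To pull the same logs from a live cluster (the app=kindnet label is an assumption based on the default kindnet DaemonSet):
	  kubectl --context functional-507544 -n kube-system logs -l app=kindnet --tail=50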
	
	
	==> kube-apiserver [bd130c1088bf7d1c124730d74a847b71c9821cf326a67aad1216cb07a547b96a] <==
	I1019 16:28:47.809715       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 16:28:47.811965       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:28:47.816819       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 16:28:47.816846       1 policy_source.go:240] refreshing policies
	I1019 16:28:47.827112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:28:48.710386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1019 16:28:49.017195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 16:28:49.018467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:28:49.023654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:28:49.241858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:28:49.523461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:28:49.620178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:28:49.673977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:28:49.680303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:28:51.304578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:29:14.725009       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.56.157"}
	I1019 16:29:18.524389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.202.206"}
	I1019 16:29:21.388791       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.23.83"}
	I1019 16:29:34.345873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:29:34.466507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.181"}
	I1019 16:29:34.481960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.204.129"}
	E1019 16:29:34.615398       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55370: use of closed network connection
	I1019 16:29:36.447894       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.19.161"}
	I1019 16:34:35.824119       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.88.91"}
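	Note: each "allocated clusterIPs" entry records a Service created by the tests (invalid-svc, hello-node, nginx-svc, the dashboard pair, mysql, hello-node-connect). A quick cross-check of these allocations against the live Services:
	  kubectl --context functional-507544 get svc -A -o wide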
	
	
	==> kube-controller-manager [5886e461a7b4a183fac570c337eba31221c4f8d80680f651e35b86312b3b4662] <==
	I1019 16:27:06.269895       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:27:06.270018       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:27:06.270100       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 16:27:06.270136       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:27:06.270243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:27:06.270449       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:27:06.270505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:27:06.270520       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:27:06.270493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:27:06.270862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:27:06.270975       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:27:06.271112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:27:06.272386       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:27:06.273558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:27:06.275820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:06.278936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 16:27:06.278986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 16:27:06.279015       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 16:27:06.279024       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 16:27:06.279028       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 16:27:06.279131       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:27:06.284734       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-507544" podCIDRs=["10.244.0.0/24"]
	I1019 16:27:06.285694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:06.285701       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:27:51.226515       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [6f9d68db6d3d66d4b89c73b933115640996e2ad1584dc4733fec9eb8f8617cee] <==
	I1019 16:28:51.147320       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:28:51.147536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:28:51.147633       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:28:51.148007       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:28:51.148025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 16:28:51.149985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:28:51.151271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.152361       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:28:51.153569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 16:28:51.153598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 16:28:51.153610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:28:51.153623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:28:51.157958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:28:51.160113       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:28:51.161272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 16:28:51.163450       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:28:51.168958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:29:34.405011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.408954       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.412916       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.413153       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.416170       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.421693       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
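	Note: the "serviceaccount \"kubernetes-dashboard\" not found" errors are a creation-ordering race: the ReplicaSet controller tries to stamp out dashboard pods before the namespace's ServiceAccount has been persisted, then retries until it exists. The dashboard pods reaching ImagePullBackOff in the kubelet log below confirm the retries eventually succeeded. To verify, assuming the default dashboard namespace:
	  kubectl --context functional-507544 -n kubernetes-dashboard get serviceaccount,replicaset,pod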
	
	
	==> kube-proxy [c4f6db9ff93af54aa47d8330711b3aacaf2d4ed53868d324569d79661c893e86] <==
	I1019 16:27:07.757997       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:27:07.829126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:27:07.929844       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:27:07.929903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:27:07.930034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:27:07.948180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:27:07.948245       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:27:07.953226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:27:07.953625       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:27:07.953644       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:27:07.954969       1 config.go:309] "Starting node config controller"
	I1019 16:27:07.954988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:27:07.954996       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:27:07.955009       1 config.go:200] "Starting service config controller"
	I1019 16:27:07.955013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:27:07.955024       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:27:07.955040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:27:07.955031       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:27:07.955093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:27:08.055919       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:27:08.055962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:27:08.055920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d105c28b8985b9425bc6dc11577787f84445b24a2501dbbf2c5479621ec7d4c5] <==
	E1019 16:28:14.055235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:14.925306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:17.992857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:21.964850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:42.435400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:05.054962       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:05.054999       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:05.055094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:05.075205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:05.075262       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:05.080962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:05.081386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:05.081402       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:05.082685       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:05.082707       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:05.082716       1 config.go:200] "Starting service config controller"
	I1019 16:29:05.082742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:05.082836       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:05.082898       1 config.go:309] "Starting node config controller"
	I1019 16:29:05.082907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:05.082920       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:05.083625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:05.182940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:05.183531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:05.184724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
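	Note: this second kube-proxy instance spans the apiserver restart; its "connection refused" and "TLS handshake timeout" reflector errors (16:28:14 through 16:28:42) clear once the apiserver serves again at 16:29:05, after which all caches sync. One way to confirm the proxy mode a running instance settled on is its metrics endpoint on the node (127.0.0.1:10249 is kube-proxy's default metrics bind address):
	  out/minikube-linux-amd64 -p functional-507544 ssh -- curl -s http://127.0.0.1:10249/proxyMode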
	
	
	==> kube-scheduler [2996708114c16d776a6aff6f9fcc6dad7f1fef587753492e0a7a7981480bcf7c] <==
	I1019 16:28:46.382085       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:28:47.731959       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:28:47.732000       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:28:47.732014       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:28:47.732024       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:28:47.750471       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:28:47.750506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:47.752862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.752902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.753175       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:28:47.753232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:28:47.853949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [61c5eb9cc5552cf24b68447b4ed0fbb9972f27fc884505108656b85504cc2ff2] <==
	E1019 16:26:59.303300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:26:59.303338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:26:59.303374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:26:59.303445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:26:59.303447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:26:59.303520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:26:59.303537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:00.157669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:27:00.187109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:27:00.188168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:27:00.235871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:27:00.343078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:27:00.351277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:27:00.373492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:27:00.375580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:27:00.470255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:27:00.471050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:00.517324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:27:00.899509       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197145       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:28:24.197196       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:28:24.197223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:28:24.197236       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:28:24.197268       1 run.go:72] "command failed" err="finished without leader elect"
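	Note: "command failed ... finished without leader elect" is the scheduler's normal exit message when its process is terminated before a leadership handover, consistent with the 16:28:24 shutdown seen in the etcd log; the replacement instance in the block above starts cleanly at 16:28:46. To check the restarted static pod, assuming the usual kube-scheduler-<node> naming:
	  kubectl --context functional-507544 -n kube-system get pod kube-scheduler-functional-507544 -o wide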
	
	
	==> kubelet <==
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.065495    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:30:36 functional-507544 kubelet[4118]: E1019 16:30:36.540475    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851597    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851680    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851928    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wqsqt_kubernetes-dashboard(cdb50ed8-0281-40b9-b352-190f2baaf640): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:32:07 functional-507544 kubelet[4118]: E1019 16:32:07.851997    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:32:08 functional-507544 kubelet[4118]: E1019 16:32:08.793415    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151615    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151683    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151916    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(f1eb7495-9064-4ffe-979e-857122179f13): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.151968    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
	Oct 19 16:33:09 functional-507544 kubelet[4118]: E1019 16:33:09.942414    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
	Oct 19 16:34:35 functional-507544 kubelet[4118]: I1019 16:34:35.938954    4118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqmbb\" (UniqueName: \"kubernetes.io/projected/661b3d8c-f0db-4f13-85dc-d37412da52f9-kube-api-access-qqmbb\") pod \"hello-node-connect-7d85dfc575-cx7lv\" (UID: \"661b3d8c-f0db-4f13-85dc-d37412da52f9\") " pod="default/hello-node-connect-7d85dfc575-cx7lv"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.898715    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.898777    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.899008    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-vgwqp_default(2a109cd2-82cc-4066-94eb-c9d0775cb362): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.899104    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.899567    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.899607    4118 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.899779    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-mv5h7_default(e68c1276-cc7a-4036-91b9-ba15632cc2bf): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.901081    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:34:41 functional-507544 kubelet[4118]: E1019 16:34:41.174893    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:34:52 functional-507544 kubelet[4118]: E1019 16:34:52.148503    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:35:07 functional-507544 kubelet[4118]: E1019 16:35:07.148094    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:35:19 functional-507544 kubelet[4118]: E1019 16:35:19.147711    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
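	Note: two distinct pull failures dominate this kubelet log. The docker.io pulls (metrics-scraper, dashboard, nginx, mysql) fail with "toomanyrequests", Docker Hub's anonymous rate limit, while kicbase/echo-server:latest fails because CRI-O's short-name enforcement refuses to resolve an unqualified image name that matches several registries. A sketch of two workarounds, assuming the host has unthrottled Docker Hub access and reusing the deployment/container names from the log:
	  # side-load the image so CRI-O never pulls it from Docker Hub
	  docker pull docker.io/kicbase/echo-server:latest
	  out/minikube-linux-amd64 -p functional-507544 image load docker.io/kicbase/echo-server:latest
	  # or reference the image by its fully qualified name so short-name resolution never runs
	  kubectl --context functional-507544 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest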
	
	
	==> storage-provisioner [1d5ff7dca36a908853bf80f1bdc7a8189fcd31580bdeff4098c2276ae5f95801] <==
	W1019 16:35:12.373418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:14.376626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:14.381767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:16.384972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:16.388969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:18.392188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:18.396335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:20.399089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:20.403582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:22.406494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:22.411991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:24.415624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:24.419673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:26.423479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:26.428699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:28.431769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:28.435892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:30.439404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:30.443391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:32.447011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:32.451324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:34.454559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:34.458661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:36.461705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:35:36.466232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b1a6aad7bccb7a4da8fd921d34e59bdef668c795f652e7e54ace1bc2adf761a6] <==
	I1019 16:28:13.955241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 16:28:13.956728       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
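The repeated storage-provisioner warnings in the log above are emitted because the provisioner (most likely its leader-election lock) still uses the core v1 Endpoints API, which Kubernetes deprecates from v1.33 in favor of discovery.k8s.io/v1 EndpointSlice; they are noise rather than the failure cause, since the actual crash in the earlier container instance is the "connection refused" against 10.96.0.1:443. A minimal sketch for checking both APIs against this cluster, assuming the functional-507544 context from this run is still reachable:

	kubectl --context functional-507544 get endpoints -n kube-system        # deprecated core/v1 API, still served
	kubectl --context functional-507544 get endpointslices -n kube-system   # discovery.k8s.io/v1 replacement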
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
helpers_test.go:269: (dbg) Run:  kubectl --context functional-507544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1 (102.183987ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://874c2ff14b97f5564fd2f8b7ea851875753a6ff2aca767743f234378aa18a8cf
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:29:24 +0000
	      Finished:     Sun, 19 Oct 2025 16:29:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxtmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pxtmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m16s  default-scheduler  Successfully assigned default/busybox-mount to functional-507544
	  Normal  Pulling    6m17s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m14s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 846ms (2.459s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m14s  kubelet            Created container: mount-munger
	  Normal  Started    6m14s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-mv5h7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n469f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n469f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m19s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mv5h7 to functional-507544
	  Warning  Failed     58s (x3 over 6m20s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     58s (x3 over 6m20s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    19s (x5 over 6m19s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     19s (x5 over 6m19s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7s (x4 over 6m20s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-cx7lv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:34:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqmbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qqmbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cx7lv to functional-507544
	  Normal  Pulling    62s   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-vgwqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s558z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s558z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m1s                default-scheduler  Successfully assigned default/mysql-5bb876957f-vgwqp to functional-507544
	  Warning  Failed     58s                 kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     58s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    57s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     57s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x2 over 6m2s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8m6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-q8m6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-507544
	  Warning  Failed     2m29s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m29s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    2m29s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m29s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m16s (x2 over 6m3s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v4d72" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wqsqt" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (377.99s)
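Two distinct pull failures recur in the describe output above. The hello-node pods fail because the node's container runtime resolves short image names in enforcing mode, so the unqualified kicbase/echo-server is rejected as ambiguous; the mysql and nginx pods instead hit Docker Hub's unauthenticated pull rate limit. For the short-name case, the usual workaround is to fully qualify the image so that no short-name resolution happens at all. A sketch, where the deployment and container names are inferred from the describe output and the commands themselves are an assumption, not part of this run:

	kubectl --context functional-507544 set image deployment/hello-node \
		echo-server=docker.io/kicbase/echo-server:latest
	# Inspect the node's short-name policy (path is the typical containers/CRI-O default):
	minikube -p functional-507544 ssh -- sudo grep -n short-name /etc/containers/registries.conf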

TestFunctional/parallel/MySQL (602.76s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-507544 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vgwqp" [2a109cd2-82cc-4066-94eb-c9d0775cb362] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1019 16:30:32.963106    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:32:49.094950    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:33:16.804732    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-19 16:39:36.801923072 +0000 UTC m=+1161.857189370
functional_test.go:1804: (dbg) Run:  kubectl --context functional-507544 describe po mysql-5bb876957f-vgwqp -n default
functional_test.go:1804: (dbg) kubectl --context functional-507544 describe po mysql-5bb876957f-vgwqp -n default:
Name:             mysql-5bb876957f-vgwqp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-507544/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:29:36 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s558z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s558z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-vgwqp to functional-507544
Warning  Failed     4m56s                kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m56s                kubelet            Error: ErrImagePull
Normal   BackOff    4m55s                kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m55s                kubelet            Error: ImagePullBackOff
Normal   Pulling    4m43s (x2 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-507544 logs mysql-5bb876957f-vgwqp -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-507544 logs mysql-5bb876957f-vgwqp -n default: exit status 1 (65.88078ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-vgwqp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-507544 logs mysql-5bb876957f-vgwqp -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
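Every mysql pull attempt above ends in Docker Hub's toomanyrequests error, i.e. the unauthenticated pull rate limit on the CI host's shared IP, not anything specific to this cluster. One mitigation is to let the default service account pull with credentials; a minimal sketch, where the regcred secret name and the DOCKER_USER/DOCKER_PASS variables are illustrative assumptions:

	kubectl --context functional-507544 create secret docker-registry regcred \
		--docker-server=https://index.docker.io/v1/ \
		--docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	kubectl --context functional-507544 patch serviceaccount default \
		-p '{"imagePullSecrets":[{"name":"regcred"}]}'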
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-507544
helpers_test.go:243: (dbg) docker inspect functional-507544:

-- stdout --
	[
	    {
	        "Id": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	        "Created": "2025-10-19T16:26:45.122198852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:26:45.156822472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hostname",
	        "HostsPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/hosts",
	        "LogPath": "/var/lib/docker/containers/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112/add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112-json.log",
	        "Name": "/functional-507544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-507544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-507544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "add43b7ec9e00c44964e724714f8e6fd86a5a7f5ea20fd7752ea76e8ee9a7112",
	                "LowerDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8980f1af902bf50d500cfed38df55e903ab2a49d3d08bd2b7c474ec81388d736/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-507544",
	                "Source": "/var/lib/docker/volumes/functional-507544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-507544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-507544",
	                "name.minikube.sigs.k8s.io": "functional-507544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08d99069f25412728f0de1767e870ab3d5e37b37c9be1e53bb041c372f124f33",
	            "SandboxKey": "/var/run/docker/netns/08d99069f254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-507544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:97:3c:cd:99:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "190b1e4dbc4b84704bafdf33f14b0c728242ffe12133f3a7d8f637228926fb2b",
	                    "EndpointID": "9fefd75494de70091fe4fdf69631663c727e7236d9987ed273d484d28bb5b3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-507544",
	                        "add43b7ec9e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-507544 -n functional-507544
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs -n 25: (1.300032871s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-507544 ssh sudo cat /etc/test/nested/copy/7228/hosts                                                                                                 │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ dashboard      │ --url --port 36195 -p functional-507544 --alsologtostderr -v=1                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ image          │ functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image save kicbase/echo-server:functional-507544 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image rm kicbase/echo-server:functional-507544 --alsologtostderr                                                                              │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-507544 image save --daemon kicbase/echo-server:functional-507544 --alsologtostderr                                                                   │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ update-context │ functional-507544 update-context --alsologtostderr -v=2                                                                                                         │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format short --alsologtostderr                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format json --alsologtostderr                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format table --alsologtostderr                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls --format yaml --alsologtostderr                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ ssh            │ functional-507544 ssh pgrep buildkitd                                                                                                                           │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │                     │
	│ image          │ functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ image          │ functional-507544 image ls                                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:35 UTC │ 19 Oct 25 16:35 UTC │
	│ service        │ functional-507544 service list                                                                                                                                  │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │ 19 Oct 25 16:39 UTC │
	│ service        │ functional-507544 service list -o json                                                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │ 19 Oct 25 16:39 UTC │
	│ service        │ functional-507544 service --namespace=default --https --url hello-node                                                                                          │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	│ service        │ functional-507544 service hello-node --url --format={{.IP}}                                                                                                     │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	│ service        │ functional-507544 service hello-node --url                                                                                                                      │ functional-507544 │ jenkins │ v1.37.0 │ 19 Oct 25 16:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:30.118385   44149 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:30.118477   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118484   44149 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:30.118488   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118804   44149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:30.119280   44149 out.go:368] Setting JSON to false
	I1019 16:29:30.120292   44149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:30.120370   44149 start.go:143] virtualization: kvm guest
	I1019 16:29:30.122096   44149 out.go:179] * [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:30.123412   44149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:30.123412   44149 notify.go:221] Checking for updates...
	I1019 16:29:30.124663   44149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:30.125918   44149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:30.127440   44149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:30.128697   44149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:30.130309   44149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:30.131905   44149 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:30.132380   44149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:30.156613   44149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:30.156707   44149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:30.214136   44149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:30.20345749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:30.214295   44149 docker.go:319] overlay module found
	I1019 16:29:30.216279   44149 out.go:179] * Using the docker driver based on the existing profile
	I1019 16:29:30.217643   44149 start.go:309] selected driver: docker
	I1019 16:29:30.217661   44149 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:30.217741   44149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:30.219580   44149 out.go:203] 
	W1019 16:29:30.220824   44149 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1019 16:29:30.221973   44149 out.go:203] 
	
	
	==> CRI-O <==
	Oct 19 16:34:53 functional-507544 crio[3576]: time="2025-10-19T16:34:53.148508163Z" level=info msg="Image docker.io/mysql:5.7 not found" id=56978f58-a4d1-4ea1-b67c-4449acafa71e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:34:53 functional-507544 crio[3576]: time="2025-10-19T16:34:53.148545882Z" level=info msg="Neither image nor artfiact docker.io/mysql:5.7 found" id=56978f58-a4d1-4ea1-b67c-4449acafa71e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:35:11 functional-507544 crio[3576]: time="2025-10-19T16:35:11.545830342Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 19 16:35:42 functional-507544 crio[3576]: time="2025-10-19T16:35:42.191361847Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bc2c1255-710a-4371-9900-24b918d1d11f name=/runtime.v1.ImageService/PullImage
	Oct 19 16:35:42 functional-507544 crio[3576]: time="2025-10-19T16:35:42.20716997Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:35:55 functional-507544 crio[3576]: time="2025-10-19T16:35:55.148669562Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ad8e6518-bf9a-4c28-b2ba-269f86f83de2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:35:55 functional-507544 crio[3576]: time="2025-10-19T16:35:55.148883386Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=ad8e6518-bf9a-4c28-b2ba-269f86f83de2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:35:55 functional-507544 crio[3576]: time="2025-10-19T16:35:55.148926779Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=ad8e6518-bf9a-4c28-b2ba-269f86f83de2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:36:10 functional-507544 crio[3576]: time="2025-10-19T16:36:10.14876473Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=db0e78f0-b200-4a4c-813f-afd0794541e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:36:10 functional-507544 crio[3576]: time="2025-10-19T16:36:10.149006517Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=db0e78f0-b200-4a4c-813f-afd0794541e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:36:10 functional-507544 crio[3576]: time="2025-10-19T16:36:10.149086971Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=db0e78f0-b200-4a4c-813f-afd0794541e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:36:43 functional-507544 crio[3576]: time="2025-10-19T16:36:43.291866764Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:37:13 functional-507544 crio[3576]: time="2025-10-19T16:37:13.936584247Z" level=info msg="Pulling image: docker.io/nginx:latest" id=f835566b-bfd5-47e5-ad01-9819e19bc07b name=/runtime.v1.ImageService/PullImage
	Oct 19 16:37:13 functional-507544 crio[3576]: time="2025-10-19T16:37:13.953368813Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 19 16:37:26 functional-507544 crio[3576]: time="2025-10-19T16:37:26.14840163Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a2451b97-3ef3-44b7-aaa3-b7a217deccee name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:26 functional-507544 crio[3576]: time="2025-10-19T16:37:26.14863053Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a2451b97-3ef3-44b7-aaa3-b7a217deccee name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:26 functional-507544 crio[3576]: time="2025-10-19T16:37:26.14869257Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=a2451b97-3ef3-44b7-aaa3-b7a217deccee name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:39 functional-507544 crio[3576]: time="2025-10-19T16:37:39.148818287Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=cdb0ee41-b488-4107-aa52-e97e0e869c3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:39 functional-507544 crio[3576]: time="2025-10-19T16:37:39.149053797Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=cdb0ee41-b488-4107-aa52-e97e0e869c3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:39 functional-507544 crio[3576]: time="2025-10-19T16:37:39.149108118Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=cdb0ee41-b488-4107-aa52-e97e0e869c3d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:37:44 functional-507544 crio[3576]: time="2025-10-19T16:37:44.599206448Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 19 16:38:15 functional-507544 crio[3576]: time="2025-10-19T16:38:15.258577995Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=417107ba-5364-4f4d-87df-605127e7bf61 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:38:15 functional-507544 crio[3576]: time="2025-10-19T16:38:15.259412164Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=a3740cef-adec-4625-921d-97e909c7302f name=/runtime.v1.ImageService/PullImage
	Oct 19 16:38:15 functional-507544 crio[3576]: time="2025-10-19T16:38:15.263721582Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 19 16:39:16 functional-507544 crio[3576]: time="2025-10-19T16:39:16.349415209Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	874c2ff14b97f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   10 minutes ago      Exited              mount-munger              0                   7580ba9d2c75e       busybox-mount                               default
	f909fd2f1f12b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       10 minutes ago      Running             nginx                     0                   0136c3b1d2067       nginx-svc                                   default
	1d5ff7dca36a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   cb2b6d0554192       storage-provisioner                         kube-system
	bd130c1088bf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   f1c19208fe76e       kube-apiserver-functional-507544            kube-system
	2996708114c16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            1                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	6f9d68db6d3d6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   1                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	232b3554efed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   2c3aea9080115       etcd-functional-507544                      kube-system
	b1a6aad7bccb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   cb2b6d0554192       storage-provisioner                         kube-system
	a6ac3c40d3695       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   1                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	d105c28b8985b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Running             kube-proxy                1                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	10025537dc3ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   e9657439b413e       kindnet-mvc2p                               kube-system
	3cdd80e1fd14a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   3cd7aae4f81be       coredns-66bc5c9577-z4xwl                    kube-system
	c4f6db9ff93af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      12 minutes ago      Exited              kube-proxy                0                   c4828939376a2       kube-proxy-rwnpm                            kube-system
	8fcdf9dec3441       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      12 minutes ago      Exited              kindnet-cni               0                   e9657439b413e       kindnet-mvc2p                               kube-system
	c992a57ea0446       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      0                   2c3aea9080115       etcd-functional-507544                      kube-system
	61c5eb9cc5552       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      12 minutes ago      Exited              kube-scheduler            0                   3a97633bdec36       kube-scheduler-functional-507544            kube-system
	5886e461a7b4a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      12 minutes ago      Exited              kube-controller-manager   0                   c167d44548f0f       kube-controller-manager-functional-507544   kube-system
	
	
	==> coredns [3cdd80e1fd14ae4035e4531b71f7296d418ddaf9b3f2faf0be415a36ca1d2613] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53096 - 64513 "HINFO IN 5388708912172919926.3799386503118995872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.101589867s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6ac3c40d36958a6de862293b761e2d75aeab587854de7d49bfae769f19fb001] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58675 - 8925 "HINFO IN 3256408165590317714.5502113091922773666. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058726274s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-507544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-507544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-507544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_27_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:26:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-507544
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:39:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:38:59 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:38:59 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:38:59 +0000   Sun, 19 Oct 2025 16:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:38:59 +0000   Sun, 19 Oct 2025 16:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-507544
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8e1900d7-85d2-490a-a8f6-e11dcc838551
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-mv5h7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-cx7lv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     mysql-5bb876957f-vgwqp                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-z4xwl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-507544                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-mvc2p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-507544              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-507544     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rwnpm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-507544              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v4d72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqsqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-507544 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-507544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-507544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-507544 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-507544 event: Registered Node functional-507544 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [232b3554efed3e895da66259cb238b385489998d30cbf9437a17db6348583118] <==
	{"level":"warn","ts":"2025-10-19T16:28:47.162655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.168752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.174569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.180801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.187323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.193536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.201096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.209599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.216364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.222822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.230208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.237492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.250556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.257478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.264056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.270509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.277016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.283231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.295941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.303147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.309595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:47.356254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:38:46.878301Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1030}
	{"level":"info","ts":"2025-10-19T16:38:46.898623Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1030,"took":"19.934294ms","hash":3149245820,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1458176,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-19T16:38:46.898684Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3149245820,"revision":1030,"compact-revision":-1}
	
	
	==> etcd [c992a57ea0446745a96ba364917b05db213b70a8a91f3769afc9ebce3fdf3850] <==
	{"level":"warn","ts":"2025-10-19T16:26:58.808666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.814744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.831321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.834820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.840818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.846932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:26:58.890385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:28:24.086788Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:28:24.086876Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:28:24.086990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:24.088545Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088577Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088642Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088649Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.088644Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:28:24.088665Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088664Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:24.088696Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:24.088706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090695Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:28:24.090752Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:24.090785Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:28:24.090794Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-507544","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:39:38 up 22 min,  0 user,  load average: 0.11, 0.22, 0.35
	Linux functional-507544 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10025537dc3abc6d9b1d66732463d3b2b95aa6fe95fe6ce8440ffb8252db820f] <==
	I1019 16:37:34.388035       1 main.go:301] handling current node
	I1019 16:37:44.390821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:37:44.390864       1 main.go:301] handling current node
	I1019 16:37:54.396995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:37:54.397032       1 main.go:301] handling current node
	I1019 16:38:04.389807       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:04.389859       1 main.go:301] handling current node
	I1019 16:38:14.387644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:14.387677       1 main.go:301] handling current node
	I1019 16:38:24.395161       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:24.395202       1 main.go:301] handling current node
	I1019 16:38:34.392637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:34.392681       1 main.go:301] handling current node
	I1019 16:38:44.391053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:44.391117       1 main.go:301] handling current node
	I1019 16:38:54.388475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:38:54.388533       1 main.go:301] handling current node
	I1019 16:39:04.387702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:04.387740       1 main.go:301] handling current node
	I1019 16:39:14.391091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:14.391149       1 main.go:301] handling current node
	I1019 16:39:24.392977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:24.393020       1 main.go:301] handling current node
	I1019 16:39:34.397193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:34.397238       1 main.go:301] handling current node
	
	
	==> kindnet [8fcdf9dec34411c1d2d1bdbf7f4262661b43f3d0d72fdbb514c24d6552cb6e4f] <==
	I1019 16:27:07.894506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:27:07.894803       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:27:07.894944       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:27:07.894962       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:27:07.894984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:27:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:27:08.096111       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:27:08.096411       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:27:08.096445       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:27:08.096616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 16:27:38.097600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 16:27:38.097611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 16:27:38.097604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 16:27:39.696719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:27:39.696755       1 metrics.go:72] Registering metrics
	I1019 16:27:39.696818       1 controller.go:711] "Syncing nftables rules"
	I1019 16:27:48.104019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:48.104059       1 main.go:301] handling current node
	I1019 16:27:58.100727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:58.100769       1 main.go:301] handling current node
	I1019 16:28:08.100413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:08.100450       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd130c1088bf7d1c124730d74a847b71c9821cf326a67aad1216cb07a547b96a] <==
	I1019 16:28:47.811965       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:28:47.816819       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 16:28:47.816846       1 policy_source.go:240] refreshing policies
	I1019 16:28:47.827112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:28:48.710386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1019 16:28:49.017195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 16:28:49.018467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:28:49.023654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:28:49.241858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:28:49.523461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:28:49.620178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:28:49.673977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:28:49.680303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:28:51.304578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:29:14.725009       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.56.157"}
	I1019 16:29:18.524389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.202.206"}
	I1019 16:29:21.388791       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.23.83"}
	I1019 16:29:34.345873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:29:34.466507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.181"}
	I1019 16:29:34.481960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.204.129"}
	E1019 16:29:34.615398       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55370: use of closed network connection
	I1019 16:29:36.447894       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.19.161"}
	I1019 16:34:35.824119       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.88.91"}
	I1019 16:38:47.726123       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5886e461a7b4a183fac570c337eba31221c4f8d80680f651e35b86312b3b4662] <==
	I1019 16:27:06.269895       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:27:06.270018       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:27:06.270100       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 16:27:06.270136       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:27:06.270243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:27:06.270449       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:27:06.270505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:27:06.270520       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:27:06.270493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:27:06.270862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:27:06.270975       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:27:06.271112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:27:06.272386       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 16:27:06.273558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:27:06.275820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:06.278936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 16:27:06.278986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 16:27:06.279015       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 16:27:06.279024       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 16:27:06.279028       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 16:27:06.279131       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:27:06.284734       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-507544" podCIDRs=["10.244.0.0/24"]
	I1019 16:27:06.285694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:06.285701       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:27:51.226515       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [6f9d68db6d3d66d4b89c73b933115640996e2ad1584dc4733fec9eb8f8617cee] <==
	I1019 16:28:51.147320       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:28:51.147536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:28:51.147633       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:28:51.148007       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:28:51.148025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 16:28:51.149985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:28:51.151271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.152361       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:28:51.153569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 16:28:51.153598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 16:28:51.153610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:28:51.153623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:51.154771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:28:51.157958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:28:51.160113       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:28:51.161272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 16:28:51.163450       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:28:51.168958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:29:34.405011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.408954       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.412916       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.413153       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.416170       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:34.421693       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [c4f6db9ff93af54aa47d8330711b3aacaf2d4ed53868d324569d79661c893e86] <==
	I1019 16:27:07.757997       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:27:07.829126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:27:07.929844       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:27:07.929903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:27:07.930034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:27:07.948180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:27:07.948245       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:27:07.953226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:27:07.953625       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:27:07.953644       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:27:07.954969       1 config.go:309] "Starting node config controller"
	I1019 16:27:07.954988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:27:07.954996       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:27:07.955009       1 config.go:200] "Starting service config controller"
	I1019 16:27:07.955013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:27:07.955024       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:27:07.955040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:27:07.955031       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:27:07.955093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:27:08.055919       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:27:08.055962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:27:08.055920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d105c28b8985b9425bc6dc11577787f84445b24a2501dbbf2c5479621ec7d4c5] <==
	E1019 16:28:14.055235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:14.925306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:17.992857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:21.964850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:28:42.435400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-507544&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:05.054962       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:05.054999       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:05.055094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:05.075205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:05.075262       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:05.080962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:05.081386       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:05.081402       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:05.082685       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:05.082707       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:05.082716       1 config.go:200] "Starting service config controller"
	I1019 16:29:05.082742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:05.082836       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:05.082898       1 config.go:309] "Starting node config controller"
	I1019 16:29:05.082907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:05.082920       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:05.083625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:05.182940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:05.183531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:05.184724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2996708114c16d776a6aff6f9fcc6dad7f1fef587753492e0a7a7981480bcf7c] <==
	I1019 16:28:46.382085       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:28:47.731959       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:28:47.732000       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:28:47.732014       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:28:47.732024       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:28:47.750471       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:28:47.750506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:47.752862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.752902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:47.753175       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:28:47.753232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:28:47.853949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [61c5eb9cc5552cf24b68447b4ed0fbb9972f27fc884505108656b85504cc2ff2] <==
	E1019 16:26:59.303300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:26:59.303338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:26:59.303374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:26:59.303445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:26:59.303447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:26:59.303520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:26:59.303537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:00.157669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:27:00.187109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:27:00.188168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:27:00.235871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:27:00.343078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:27:00.351277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:27:00.373492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:27:00.375580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:27:00.470255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:27:00.471050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:00.517324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 16:27:00.899509       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:24.197145       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:28:24.197196       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:28:24.197223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:28:24.197236       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:28:24.197268       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 16:34:40 functional-507544 kubelet[4118]: E1019 16:34:40.901081    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:34:41 functional-507544 kubelet[4118]: E1019 16:34:41.174893    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vgwqp" podUID="2a109cd2-82cc-4066-94eb-c9d0775cb362"
	Oct 19 16:34:52 functional-507544 kubelet[4118]: E1019 16:34:52.148503    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:35:07 functional-507544 kubelet[4118]: E1019 16:35:07.148094    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:35:19 functional-507544 kubelet[4118]: E1019 16:35:19.147711    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mv5h7" podUID="e68c1276-cc7a-4036-91b9-ba15632cc2bf"
	Oct 19 16:35:42 functional-507544 kubelet[4118]: E1019 16:35:42.190683    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:35:42 functional-507544 kubelet[4118]: E1019 16:35:42.190781    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 19 16:35:42 functional-507544 kubelet[4118]: E1019 16:35:42.191031    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-v4d72_kubernetes-dashboard(f72c10dc-e8cd-4faf-99bc-7c5d642cadce): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:35:42 functional-507544 kubelet[4118]: E1019 16:35:42.191152    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:35:55 functional-507544 kubelet[4118]: E1019 16:35:55.149305    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v4d72" podUID="f72c10dc-e8cd-4faf-99bc-7c5d642cadce"
	Oct 19 16:37:13 functional-507544 kubelet[4118]: E1019 16:37:13.936043    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:37:13 functional-507544 kubelet[4118]: E1019 16:37:13.936144    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 19 16:37:13 functional-507544 kubelet[4118]: E1019 16:37:13.936392    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wqsqt_kubernetes-dashboard(cdb50ed8-0281-40b9-b352-190f2baaf640): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:37:13 functional-507544 kubelet[4118]: E1019 16:37:13.936466    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:37:26 functional-507544 kubelet[4118]: E1019 16:37:26.149059    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqsqt" podUID="cdb50ed8-0281-40b9-b352-190f2baaf640"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.257939    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.258023    4118 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.258291    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(f1eb7495-9064-4ffe-979e-857122179f13): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.258364    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.258974    4118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.259020    4118 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.259176    4118 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-cx7lv_default(661b3d8c-f0db-4f13-85dc-d37412da52f9): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.259419    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
	Oct 19 16:38:15 functional-507544 kubelet[4118]: E1019 16:38:15.744689    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cx7lv" podUID="661b3d8c-f0db-4f13-85dc-d37412da52f9"
	Oct 19 16:38:28 functional-507544 kubelet[4118]: E1019 16:38:28.148357    4118 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f1eb7495-9064-4ffe-979e-857122179f13"
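	Note: two distinct pull failures recur above. The "short name mode is enforcing" errors come from CRI-O's short-name policy rejecting the unqualified reference "kicbase/echo-server"; a fully qualified name sidesteps the ambiguity. The "toomanyrequests" errors are Docker Hub's unauthenticated pull rate limit. A minimal sketch of the first fix, using the container name shown in the describe output further down (all other choices illustrative):
	  kubectl --context functional-507544 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
	Alternatively, short-name-mode could be relaxed to "permissive" in the node's /etc/containers/registries.conf (reachable via minikube ssh).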
	
	
	==> storage-provisioner [1d5ff7dca36a908853bf80f1bdc7a8189fcd31580bdeff4098c2276ae5f95801] <==
	W1019 16:39:13.337341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:15.341180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:15.345886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:17.349518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:17.353771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:19.356896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:19.361031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:21.364222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:21.370668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:23.374172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:23.378111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:25.381220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:25.386952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:27.389713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:27.393921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:29.396869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:29.400991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:31.404501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:31.409060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:33.412441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:33.417002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:35.420580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:35.424773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:37.427760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:39:37.432210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
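	Note: these repeated client-go warnings appear to come from the provisioner still reading and writing v1 Endpoints objects (e.g. for its leader-election lock); they are deprecation notices, not failures. The replacement objects can be inspected directly, for example:
	  kubectl --context functional-507544 get endpointslices.discovery.k8s.io -A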
	
	
	==> storage-provisioner [b1a6aad7bccb7a4da8fd921d34e59bdef668c795f652e7e54ace1bc2adf761a6] <==
	I1019 16:28:13.955241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 16:28:13.956728       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
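	Note: this earlier instance died because the API server behind the in-cluster service address 10.96.0.1:443 was not yet accepting connections during the 16:28 restart window above. A quick reachability check from the host, assuming the kubeconfig context exists:
	  kubectl --context functional-507544 get --raw /version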
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
helpers_test.go:269: (dbg) Run:  kubectl --context functional-507544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1 (97.77069ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://874c2ff14b97f5564fd2f8b7ea851875753a6ff2aca767743f234378aa18a8cf
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:29:24 +0000
	      Finished:     Sun, 19 Oct 2025 16:29:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxtmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pxtmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-507544
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 846ms (2.459s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-mv5h7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n469f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n469f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mv5h7 to functional-507544
	  Warning  Failed     4m59s (x3 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     4m59s (x3 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m20s (x5 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m20s (x5 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4m8s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-cx7lv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:34:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqmbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qqmbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  5m3s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cx7lv to functional-507544
	  Warning  Failed     84s                 kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     84s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    84s                 kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     84s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    72s (x2 over 5m3s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-vgwqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s558z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s558z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-vgwqp to functional-507544
	  Warning  Failed     4m59s                kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m59s                kubelet            Error: ErrImagePull
	  Normal   BackOff    4m58s                kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m58s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4m46s (x2 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-507544/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8m6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-q8m6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-507544
	  Warning  Failed     84s (x2 over 6m30s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     84s (x2 over 6m30s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    71s (x2 over 6m30s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     71s (x2 over 6m30s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    60s (x3 over 10m)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v4d72" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wqsqt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-507544 describe pod busybox-mount hello-node-75c85bcc94-mv5h7 hello-node-connect-7d85dfc575-cx7lv mysql-5bb876957f-vgwqp sp-pod dashboard-metrics-scraper-77bf4d6c4c-v4d72 kubernetes-dashboard-855c9754f9-wqsqt: exit status 1
E1019 16:42:49.095299    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:44:12.166221    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/MySQL (602.76s)
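The MySQL pod never started because docker.io/mysql:5.7 hit Docker Hub's unauthenticated pull rate limit. A minimal sketch of authenticating pulls via an imagePullSecret, where the secret name and credential placeholders are illustrative:
  kubectl --context functional-507544 create secret docker-registry regcred --docker-username=<user> --docker-password=<token>
  kubectl --context functional-507544 patch serviceaccount default -p '{"imagePullSecrets":[{"name":"regcred"}]}'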

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-507544 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-507544 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-mv5h7" [e68c1276-cc7a-4036-91b9-ba15632cc2bf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-507544 -n functional-507544
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-19 16:39:18.855606114 +0000 UTC m=+1143.910872419
functional_test.go:1460: (dbg) Run:  kubectl --context functional-507544 describe po hello-node-75c85bcc94-mv5h7 -n default
functional_test.go:1460: (dbg) kubectl --context functional-507544 describe po hello-node-75c85bcc94-mv5h7 -n default:
Name:             hello-node-75c85bcc94-mv5h7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-507544/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:29:18 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n469f (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-n469f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mv5h7 to functional-507544
  Warning  Failed     4m38s (x3 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     4m38s (x3 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    3m59s (x5 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     3m59s (x5 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    3m47s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-507544 logs hello-node-75c85bcc94-mv5h7 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-507544 logs hello-node-75c85bcc94-mv5h7 -n default: exit status 1 (71.132347ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-mv5h7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-507544 logs hello-node-75c85bcc94-mv5h7 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
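The deployment in this test uses the unqualified image name "kicbase/echo-server", which CRI-O's enforcing short-name mode rejects as ambiguous. A sketch of the same deployment with a fully qualified reference (the tag matches the one named in the kubelet error):
  kubectl --context functional-507544 create deployment hello-node --image=docker.io/kicbase/echo-server:latest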

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-507544" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-507544" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-507544
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image load --daemon kicbase/echo-server:functional-507544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-507544" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image save kicbase/echo-server:functional-507544 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1019 16:29:35.831191   46592 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:35.831533   46592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:35.831554   46592 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:35.831560   46592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:35.831844   46592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:35.832715   46592 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:35.832852   46592 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:35.833304   46592 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
	I1019 16:29:35.854005   46592 ssh_runner.go:195] Run: systemctl --version
	I1019 16:29:35.854054   46592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
	I1019 16:29:35.872573   46592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
	I1019 16:29:35.969085   46592 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1019 16:29:35.969164   46592 cache_images.go:255] Failed to load cached images for "functional-507544": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1019 16:29:35.969200   46592 cache_images.go:267] failed pushing to: functional-507544
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
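Note: the "no such file or directory" above is a cascade from the failed ImageSaveToFile run; the tarball at that path was never written. A Go sketch of guarding the load with an existence check first (paths taken from the log, flow is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
	if _, err := os.Stat(tar); err != nil {
		// In this run the guard would trip here, before invoking minikube.
		fmt.Printf("refusing to load: %v\n", err)
		return
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-507544",
		"image", "load", tar).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}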
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-507544
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image save --daemon kicbase/echo-server:functional-507544 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-507544
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-507544: exit status 1 (18.851771ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-507544
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-507544
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
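Note: after `image save --daemon`, the test looks the image up in the host Docker daemon under the localhost/ prefix, as the log shows. A Go sketch of that lookup with a hypothetical unprefixed fallback (the fallback is my assumption, not test behavior):

package main

import (
	"fmt"
	"os/exec"
)

// inspect returns nil if the Docker daemon knows the given image reference.
func inspect(ref string) error {
	_, err := exec.Command("docker", "image", "inspect", ref).Output()
	return err
}

func main() {
	if err := inspect("localhost/kicbase/echo-server:functional-507544"); err != nil {
		fmt.Printf("not found under localhost/ prefix: %v\n", err)
		// Hypothetical fallback: also check the unprefixed name.
		if err := inspect("kicbase/echo-server:functional-507544"); err != nil {
			fmt.Printf("not found unprefixed either: %v\n", err)
		}
	}
}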
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 service --namespace=default --https --url hello-node: exit status 115 (543.750586ms)
-- stdout --
	https://192.168.49.2:30513
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-507544 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
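Note: SVC_UNREACHABLE here means the Service object exists (a NodePort URL is even printed) but no running pod backs it. A Go sketch of checking for ready endpoints before asking for the URL; kubectl on PATH and the default namespace are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the ready backing-pod IPs for the hello-node service.
	out, err := exec.Command("kubectl", "get", "endpoints", "hello-node",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		// This is the state the test hit: a service with no ready endpoints.
		fmt.Println("no ready endpoints for hello-node; the service URL would be dead")
		return
	}
	fmt.Printf("backing pod IPs: %s\n", out)
}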
TestFunctional/parallel/ServiceCmd/Format (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 service hello-node --url --format={{.IP}}: exit status 115 (540.68225ms)
-- stdout --
	192.168.49.2
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-507544 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)
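Note: the `--format={{.IP}}` flag is Go text/template syntax rendered over the service's data, which is how the run above still prints "192.168.49.2" before exiting. A standalone sketch with an assumed struct shape (the real field set lives in minikube):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Assumed shape; values mirror the log output above.
	svc := struct {
		IP   string
		Port int32
	}{IP: "192.168.49.2", Port: 30513}

	// "{{.IP}}" selects the IP field, exactly like the --format argument.
	tmpl := template.Must(template.New("svc").Parse("{{.IP}}\n"))
	if err := tmpl.Execute(os.Stdout, svc); err != nil {
		panic(err)
	}
}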
TestFunctional/parallel/ServiceCmd/URL (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 service hello-node --url: exit status 115 (536.8744ms)
-- stdout --
	http://192.168.49.2:30513
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-507544 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30513
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
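Note: interestingly, the harness still finds a usable endpoint in stdout even though the command exits 115. A Go sketch of capturing the URL while surfacing the exit code; the binary path and profile are taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-507544",
		"service", "hello-node", "--url")
	out, err := cmd.Output() // stdout only; stderr carries the SVC_UNREACHABLE advice
	url := strings.TrimSpace(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The run above lands here: a printed URL plus exit status 115.
		fmt.Printf("got %q but exit status %d\n", url, ee.ExitCode())
		return
	}
	fmt.Println(url)
}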
TestJSONOutput/pause/Command (2.2s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-663592 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-663592 --output=json --user=testUser: exit status 80 (2.202514408s)
-- stdout --
	{"specversion":"1.0","id":"0f7647c5-ce6f-47f3-98d9-522717563c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-663592 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d57b9832-42ab-4b4c-ad9c-ed75cc03468d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T16:52:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"845a0e74-5823-430c-aa42-6015715deb91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-663592 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.20s)
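Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as seen above; the underlying failure is runc's "open /run/runc: no such file or directory". A Go sketch of scanning such a stream for error events; the field names mirror the log, everything else is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the fields visible in the log lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube pause --output=json` output into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the events can be long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exitcode %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}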
TestJSONOutput/unpause/Command (1.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-663592 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-663592 --output=json --user=testUser: exit status 80 (1.624741046s)
-- stdout --
	{"specversion":"1.0","id":"501c1eb2-30bf-48d7-be46-4e2ff4731ea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-663592 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"df0d844e-bee6-4aac-8685-7b8a59edbead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T16:52:49Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"0fa2896c-ce23-45ea-bab1-55039ab1841d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-663592 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.63s)
TestPreload (427.89s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-764642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.653186802s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764642 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-764642 image pull gcr.io/k8s-minikube/busybox: (1.008194719s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-764642
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-764642: (5.993143694s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1019 17:02:49.094713    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:18.530264    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:41.595360    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:07:49.094560    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-764642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m9.464714141s)
-- stdout --
	* [test-preload-764642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-764642" primary control-plane node in "test-preload-764642" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	
-- /stdout --
** stderr ** 
	I1019 17:02:22.790163  170457 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:02:22.790281  170457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:02:22.790292  170457 out.go:374] Setting ErrFile to fd 2...
	I1019 17:02:22.790299  170457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:02:22.790550  170457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:02:22.791020  170457 out.go:368] Setting JSON to false
	I1019 17:02:22.791932  170457 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2689,"bootTime":1760890654,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:02:22.792022  170457 start.go:143] virtualization: kvm guest
	I1019 17:02:22.794262  170457 out.go:179] * [test-preload-764642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:02:22.795675  170457 notify.go:221] Checking for updates...
	I1019 17:02:22.795697  170457 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:02:22.796951  170457 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:02:22.798145  170457 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:22.799300  170457 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:02:22.800406  170457 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:02:22.801600  170457 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:02:22.803318  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:22.804993  170457 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:02:22.806169  170457 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:02:22.829623  170457 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:02:22.829779  170457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:02:22.888327  170457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 17:02:22.877604145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:02:22.888461  170457 docker.go:319] overlay module found
	I1019 17:02:22.890165  170457 out.go:179] * Using the docker driver based on existing profile
	I1019 17:02:22.891411  170457 start.go:309] selected driver: docker
	I1019 17:02:22.891429  170457 start.go:930] validating driver "docker" against &{Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:22.891536  170457 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:02:22.892266  170457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:02:22.950376  170457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 17:02:22.9409471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:02:22.950656  170457 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:02:22.950683  170457 cni.go:84] Creating CNI manager for ""
	I1019 17:02:22.950746  170457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:02:22.950796  170457 start.go:353] cluster config:
	{Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:22.953552  170457 out.go:179] * Starting "test-preload-764642" primary control-plane node in "test-preload-764642" cluster
	I1019 17:02:22.954702  170457 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:02:22.956503  170457 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:02:22.957462  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:22.957573  170457 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:02:22.977454  170457 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:02:22.977474  170457 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:02:22.983209  170457 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:02:22.983249  170457 cache.go:59] Caching tarball of preloaded images
	I1019 17:02:22.983371  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:22.985114  170457 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1019 17:02:22.986401  170457 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 17:02:23.027705  170457 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1019 17:02:23.027757  170457 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:02:25.395540  170457 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1019 17:02:25.395691  170457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/config.json ...
	I1019 17:02:25.395907  170457 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:02:25.395943  170457 start.go:360] acquireMachinesLock for test-preload-764642: {Name:mkcf3cadf84b7ebd663f42d50c88555e2bbe85f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:02:25.396010  170457 start.go:364] duration metric: took 45.815µs to acquireMachinesLock for "test-preload-764642"
	I1019 17:02:25.396030  170457 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:02:25.396036  170457 fix.go:54] fixHost starting: 
	I1019 17:02:25.396280  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:25.413876  170457 fix.go:112] recreateIfNeeded on test-preload-764642: state=Stopped err=<nil>
	W1019 17:02:25.413908  170457 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:02:25.416811  170457 out.go:252] * Restarting existing docker container for "test-preload-764642" ...
	I1019 17:02:25.416876  170457 cli_runner.go:164] Run: docker start test-preload-764642
	I1019 17:02:25.654524  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:25.674291  170457 kic.go:430] container "test-preload-764642" state is running.
	I1019 17:02:25.674639  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:25.692955  170457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/config.json ...
	I1019 17:02:25.693223  170457 machine.go:94] provisionDockerMachine start ...
	I1019 17:02:25.693286  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:25.711999  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:25.712345  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:25.712364  170457 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:02:25.712974  170457 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46656->127.0.0.1:32958: read: connection reset by peer
	I1019 17:02:28.845426  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-764642
	
	I1019 17:02:28.845458  170457 ubuntu.go:182] provisioning hostname "test-preload-764642"
	I1019 17:02:28.845525  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:28.864222  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:28.864447  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:28.864461  170457 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-764642 && echo "test-preload-764642" | sudo tee /etc/hostname
	I1019 17:02:29.007094  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-764642
	
	I1019 17:02:29.007179  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.025461  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:29.025747  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:29.025774  170457 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-764642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-764642/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-764642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:02:29.158801  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:02:29.158835  170457 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:02:29.158867  170457 ubuntu.go:190] setting up certificates
	I1019 17:02:29.158878  170457 provision.go:84] configureAuth start
	I1019 17:02:29.158930  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:29.177139  170457 provision.go:143] copyHostCerts
	I1019 17:02:29.177202  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:02:29.177221  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:02:29.177294  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:02:29.177432  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:02:29.177448  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:02:29.177491  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:02:29.177577  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:02:29.177587  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:02:29.177624  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:02:29.177695  170457 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.test-preload-764642 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-764642]
	I1019 17:02:29.309892  170457 provision.go:177] copyRemoteCerts
	I1019 17:02:29.309948  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:02:29.309985  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.327548  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:29.423324  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:02:29.440958  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:02:29.457888  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:02:29.475255  170457 provision.go:87] duration metric: took 316.362601ms to configureAuth
	I1019 17:02:29.475288  170457 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:02:29.475452  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:29.475548  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.492917  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:29.493145  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:29.493165  170457 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:02:29.763966  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:02:29.763990  170457 machine.go:97] duration metric: took 4.07075091s to provisionDockerMachine
	I1019 17:02:29.764005  170457 start.go:293] postStartSetup for "test-preload-764642" (driver="docker")
	I1019 17:02:29.764019  170457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:02:29.764112  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:02:29.764172  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.782442  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:29.878698  170457 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:02:29.882457  170457 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:02:29.882490  170457 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:02:29.882502  170457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:02:29.882556  170457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:02:29.882651  170457 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:02:29.882741  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:02:29.890680  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:02:29.908204  170457 start.go:296] duration metric: took 144.185723ms for postStartSetup
	I1019 17:02:29.908271  170457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:02:29.908321  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.926673  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.019363  170457 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:02:30.023764  170457 fix.go:56] duration metric: took 4.627721034s for fixHost
	I1019 17:02:30.023789  170457 start.go:83] releasing machines lock for "test-preload-764642", held for 4.627765183s
	I1019 17:02:30.023862  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:30.041267  170457 ssh_runner.go:195] Run: cat /version.json
	I1019 17:02:30.041302  170457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:02:30.041317  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:30.041352  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:30.060560  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.061526  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.213779  170457 ssh_runner.go:195] Run: systemctl --version
	I1019 17:02:30.220216  170457 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:02:30.254648  170457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:02:30.259338  170457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:02:30.259413  170457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:02:30.267704  170457 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:02:30.267724  170457 start.go:496] detecting cgroup driver to use...
	I1019 17:02:30.267761  170457 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:02:30.267809  170457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:02:30.282004  170457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:02:30.294133  170457 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:02:30.294186  170457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:02:30.308400  170457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:02:30.320466  170457 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:02:30.401991  170457 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:02:30.482748  170457 docker.go:234] disabling docker service ...
	I1019 17:02:30.482806  170457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:02:30.497469  170457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:02:30.510639  170457 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:02:30.592582  170457 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:02:30.673284  170457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:02:30.685585  170457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:02:30.699717  170457 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1019 17:02:30.699770  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.708516  170457 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:02:30.708582  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.717262  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.725858  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.734495  170457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:02:30.742481  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.751450  170457 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.760126  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.768671  170457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:02:30.775855  170457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:02:30.783289  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:30.859327  170457 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:02:30.969001  170457 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:02:30.969086  170457 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:02:30.973253  170457 start.go:564] Will wait 60s for crictl version
	I1019 17:02:30.973308  170457 ssh_runner.go:195] Run: which crictl
	I1019 17:02:30.977140  170457 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:02:31.000870  170457 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:02:31.000962  170457 ssh_runner.go:195] Run: crio --version
	I1019 17:02:31.028021  170457 ssh_runner.go:195] Run: crio --version
	I1019 17:02:31.057141  170457 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1019 17:02:31.058535  170457 cli_runner.go:164] Run: docker network inspect test-preload-764642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:02:31.075805  170457 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:02:31.079980  170457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:02:31.090200  170457 kubeadm.go:884] updating cluster {Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:02:31.090290  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:31.090327  170457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:02:31.120443  170457 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:02:31.120464  170457 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:02:31.120507  170457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:02:31.145354  170457 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:02:31.145376  170457 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:02:31.145382  170457 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1019 17:02:31.145476  170457 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-764642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:02:31.145588  170457 ssh_runner.go:195] Run: crio config
	I1019 17:02:31.188674  170457 cni.go:84] Creating CNI manager for ""
	I1019 17:02:31.188694  170457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:02:31.188709  170457 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:02:31.188730  170457 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-764642 NodeName:test-preload-764642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:02:31.188840  170457 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-764642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
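
The kubeadm.yaml above is generated from the kubeadm options struct logged at 17:02:31.188730 and shipped to /var/tmp/minikube/kubeadm.yaml.new below. A text/template sketch of that kind of options-to-manifest rendering (the struct fields and template body here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // opts is an illustrative subset of the kubeadm options logged above.
    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        PodSubnet        string
        ServiceCIDR      string
    }

    const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(manifest))
        if err := t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.76.2",
            APIServerPort:    8443,
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
        }); err != nil {
            panic(err)
        }
    }
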
	I1019 17:02:31.188900  170457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1019 17:02:31.197143  170457 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:02:31.197212  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:02:31.205100  170457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1019 17:02:31.217673  170457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:02:31.230231  170457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 17:02:31.242722  170457 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:02:31.246385  170457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
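
The bash pipeline above makes the hosts entry idempotent: filter out any stale control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same logic as a Go sketch (the function name is made up; the path and values come from the log):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the bash pipeline: drop any stale line for
    // host, append the current mapping, and write via a temp file.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        kept := []string{}
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t...$'
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        // The logged command copies over the file with `sudo cp` instead of
        // renaming; a copy also works when /etc/hosts is a bind mount, as it
        // is inside a container.
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
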
	I1019 17:02:31.256411  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:31.333607  170457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:02:31.358614  170457 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642 for IP: 192.168.76.2
	I1019 17:02:31.358634  170457 certs.go:195] generating shared ca certs ...
	I1019 17:02:31.358648  170457 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:31.358792  170457 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:02:31.358848  170457 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:02:31.358861  170457 certs.go:257] generating profile certs ...
	I1019 17:02:31.358968  170457 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key
	I1019 17:02:31.359020  170457 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.key.d7385673
	I1019 17:02:31.359095  170457 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.key
	I1019 17:02:31.359259  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:02:31.359296  170457 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:02:31.359302  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:02:31.359331  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:02:31.359364  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:02:31.359394  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:02:31.359444  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:02:31.360096  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:02:31.379228  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:02:31.399031  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:02:31.419871  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:02:31.444332  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:02:31.462166  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1019 17:02:31.479712  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:02:31.496728  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:02:31.514444  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:02:31.532377  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:02:31.550790  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:02:31.567856  170457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:02:31.580677  170457 ssh_runner.go:195] Run: openssl version
	I1019 17:02:31.586938  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:02:31.595977  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.599876  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.599933  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.634535  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:02:31.642831  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:02:31.651139  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.654754  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.654811  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.688229  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:02:31.696326  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:02:31.705413  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.709549  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.709631  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.744417  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:02:31.753062  170457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:02:31.756954  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:02:31.791898  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:02:31.825854  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:02:31.862015  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:02:31.906665  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:02:31.957109  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
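
Two openssl idioms drive this block. `x509 -hash` prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created above (b5213941.0 is minikubeCA's), and `x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours). The expiry check is easy to mirror with crypto/x509; a sketch (the helper name is made up):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring `openssl x509 -checkend` (86400s = 24h in the log).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
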
	I1019 17:02:31.994259  170457 kubeadm.go:401] StartCluster: {Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:31.994380  170457 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:02:31.994454  170457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:02:32.024156  170457 cri.go:89] found id: "efcc7f9f584733beb448fd7c42f2d2dd702c7e1f67af89218a20677b80ebf7a1"
	I1019 17:02:32.024180  170457 cri.go:89] found id: "6eb1c676037c8b10318b2b1fa1ad6fb08228ef713c8948af966f5dc421e5e59b"
	I1019 17:02:32.024186  170457 cri.go:89] found id: ""
	I1019 17:02:32.024240  170457 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:02:32.036856  170457 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:02:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:02:32.036947  170457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:02:32.044951  170457 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:02:32.044973  170457 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:02:32.045025  170457 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:02:32.052554  170457 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:02:32.052958  170457 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-764642" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:32.053058  170457 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-764642" cluster setting kubeconfig missing "test-preload-764642" context setting]
	I1019 17:02:32.053421  170457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.053893  170457 kapi.go:59] client config for test-preload-764642: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
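
kapi.go builds its client config directly as a rest.Config: the apiserver endpoint plus the profile's client certificate, key, and cluster CA. Building the equivalent client by hand with client-go, as a sketch using the paths from the log:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient assembles a clientset from the same endpoint and TLS
    // material shown in the rest.Config dump above.
    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://192.168.76.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        clientset, err := newClient()
        if err != nil {
            panic(err)
        }
        _ = clientset // ready for API calls, e.g. clientset.CoreV1().Nodes()
    }
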
	I1019 17:02:32.054300  170457 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 17:02:32.054315  170457 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 17:02:32.054320  170457 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 17:02:32.054324  170457 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 17:02:32.054327  170457 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 17:02:32.054625  170457 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:02:32.062215  170457 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:02:32.062242  170457 kubeadm.go:602] duration metric: took 17.263069ms to restartPrimaryControlPlane
	I1019 17:02:32.062250  170457 kubeadm.go:403] duration metric: took 68.002901ms to StartCluster
	I1019 17:02:32.062263  170457 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.062334  170457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:32.062893  170457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.063165  170457 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:02:32.063234  170457 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:02:32.063347  170457 addons.go:70] Setting storage-provisioner=true in profile "test-preload-764642"
	I1019 17:02:32.063363  170457 addons.go:239] Setting addon storage-provisioner=true in "test-preload-764642"
	W1019 17:02:32.063372  170457 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:02:32.063388  170457 addons.go:70] Setting default-storageclass=true in profile "test-preload-764642"
	I1019 17:02:32.063404  170457 host.go:66] Checking if "test-preload-764642" exists ...
	I1019 17:02:32.063418  170457 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-764642"
	I1019 17:02:32.063418  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:32.063741  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.063948  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.066010  170457 out.go:179] * Verifying Kubernetes components...
	I1019 17:02:32.067215  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:32.086098  170457 kapi.go:59] client config for test-preload-764642: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:02:32.086210  170457 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:02:32.086500  170457 addons.go:239] Setting addon default-storageclass=true in "test-preload-764642"
	W1019 17:02:32.086523  170457 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:02:32.086553  170457 host.go:66] Checking if "test-preload-764642" exists ...
	I1019 17:02:32.087092  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.087826  170457 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:02:32.087848  170457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:02:32.087905  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:32.115136  170457 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:02:32.115160  170457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:02:32.115219  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:32.116880  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:32.139566  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:32.184456  170457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:02:32.197803  170457 node_ready.go:35] waiting up to 6m0s for node "test-preload-764642" to be "Ready" ...
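
The wait announced here polls the node's Ready condition for up to 6 minutes, treating transport failures (the TLS handshake timeouts and connection-refused errors that follow while the apiserver restarts) as retryable. A client-go sketch of that loop's shape (not minikube's actual implementation):

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node reports Ready or the timeout passes.
    // Get errors are swallowed and retried, matching the warnings below.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

A caller would feed it the clientset built from the rest.Config shown at 17:02:32.053893.
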
	I1019 17:02:32.223468  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:02:32.248036  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:02:42.199238  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:02:52.200308  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:02:52.286927  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.063421483s)
	W1019 17:02:52.286962  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.286985  170457 retry.go:31] will retry after 341.789128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.320908  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (20.072831057s)
	W1019 17:02:52.320949  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.320965  170457 retry.go:31] will retry after 301.292723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
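
Each failed apply above is handed to retry.go, which reschedules it after a randomized, growing delay (341ms and 301ms here, later climbing through seconds to 13s), so the applies keep probing while the control plane comes back. A generic Go sketch of that retry-with-backoff shape (not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling f with an exponentially growing,
    // jittered delay between failures -- the same shape as the
    // "will retry after ..." lines in this log.
    func retryWithBackoff(initial time.Duration, attempts int, f func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))/2
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(300*time.Millisecond, 5, func() error {
            tries++
            if tries < 3 {
                return errors.New("connection refused") // stand-in for the failing kubectl apply
            }
            return nil
        })
        fmt.Println("result:", err)
    }
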
	I1019 17:02:52.622454  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:02:52.629101  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:03.745330  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.76.1:56780->192.168.76.2:8443: read: connection reset by peer
	W1019 17:03:13.746402  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:03:13.750404  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.127910865s)
	I1019 17:03:13.750439  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (21.121311024s)
	W1019 17:03:13.750458  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:13.750477  170457 retry.go:31] will retry after 286.161608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:13.750441  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:13.750498  170457 retry.go:31] will retry after 345.623154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:14.037002  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:14.096716  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:03:15.131282  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.09423771s)
	W1019 17:03:15.131321  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131335  170457 retry.go:31] will retry after 754.569562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131335  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.034573163s)
	W1019 17:03:15.131363  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131385  170457 retry.go:31] will retry after 766.782295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.886460  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:15.899149  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:15.943800  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.943835  170457 retry.go:31] will retry after 992.301835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:15.956419  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.956448  170457 retry.go:31] will retry after 847.560285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:16.199323  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:16.804695  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:16.860171  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.860209  170457 retry.go:31] will retry after 1.74293922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.936331  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:16.991653  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.991684  170457 retry.go:31] will retry after 885.009476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:17.876907  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:17.933974  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:17.934012  170457 retry.go:31] will retry after 2.191270849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:18.603412  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:18.659827  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:18.659862  170457 retry.go:31] will retry after 1.527929793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:18.698332  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:20.125899  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:20.181044  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.181093  170457 retry.go:31] will retry after 2.351921748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.188272  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:20.243334  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.243389  170457 retry.go:31] will retry after 2.231694799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:20.699385  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:22.476057  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:22.530849  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.530883  170457 retry.go:31] will retry after 4.671029691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.534040  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:22.588978  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.589009  170457 retry.go:31] will retry after 2.706627471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:23.198936  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:25.199126  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:25.296301  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:25.353652  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:25.353684  170457 retry.go:31] will retry after 7.117580556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:27.202922  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:27.259464  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:27.259498  170457 retry.go:31] will retry after 5.610702604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:27.698399  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:32.471535  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:32.870729  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:39.700429  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:03:49.702523  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:03:50.717308  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.245723536s)
	W1019 17:03:50.717353  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39626->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:50.717356  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (17.846592202s)
	I1019 17:03:50.717374  170457 retry.go:31] will retry after 13.299653801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39626->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:50.717377  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39638->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:50.717393  170457 retry.go:31] will retry after 13.66811691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39638->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:52.198803  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:54.698605  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:56.699130  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:59.198763  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:01.698604  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:03.699047  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:04:04.017428  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:04:04.074400  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.074432  170457 retry.go:31] will retry after 14.584149419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.386451  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:04:04.443208  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.443246  170457 retry.go:31] will retry after 13.014964721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:04:06.198511  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:08.698514  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:04:17.459296  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:04:18.659279  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:04:20.700121  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:04:30.701120  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:04:31.957202  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (14.497866675s)
	W1019 17:04:31.957235  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45624->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:31.957288  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.29796714s)
	W1019 17:04:31.957321  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45630->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:04:31.957349  170457 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45624->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1019 17:04:31.957390  170457 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45630->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 17:04:31.960038  170457 out.go:179] * Enabled addons: 
	I1019 17:04:31.961638  170457 addons.go:515] duration metric: took 1m59.898405376s for enable addons: enabled=[]
	W1019 17:04:33.198456  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:35.198885  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:37.698620  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:39.699468  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:42.198999  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:44.698806  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:46.699123  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:49.199135  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:51.698743  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:54.198578  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:56.198757  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:58.698512  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:00.699112  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:03.198810  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:05.698669  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:07.699357  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:10.199112  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:12.698603  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:15.198585  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:17.199427  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:27.703495  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:05:37.708540  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:05:40.198602  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:42.199050  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:44.698759  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:47.198542  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:49.199372  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:51.698959  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:54.198757  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:56.698561  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:58.699219  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:01.198809  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:03.698748  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:06.198463  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:08.198819  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:10.698519  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:12.699218  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:15.198651  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:17.199196  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:19.698929  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:22.198864  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:24.698584  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:26.699032  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:29.198693  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:31.199140  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:33.698327  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:35.699407  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:38.198625  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:40.698457  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:42.698784  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:44.699452  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:47.198517  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:49.198625  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:51.698489  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:53.699131  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:55.699294  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:57.699442  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:00.199478  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:02.698593  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:14.700414  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:07:24.702559  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:07:27.199431  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:29.698568  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:31.699150  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:34.198803  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:36.698567  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:38.699168  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:41.198905  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:43.698700  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:45.699444  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:48.199019  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:50.698652  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:52.699248  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:55.198843  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:57.698602  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:59.699332  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:02.198827  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:04.698364  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:06.698641  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:08.699103  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:11.198580  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:13.198981  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:15.698607  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:17.699205  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:20.198713  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:22.698658  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:25.198631  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:27.698678  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:30.198507  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:08:32.198222  170457 node_ready.go:38] duration metric: took 6m0.000376108s for node "test-preload-764642" to be "Ready" ...
	I1019 17:08:32.200485  170457 out.go:203] 
	W1019 17:08:32.202127  170457 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1019 17:08:32.202150  170457 out.go:285] * 
	W1019 17:08:32.203967  170457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:08:32.205697  170457 out.go:203] 

                                                
                                                
** /stderr **
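Note: the failure above is minikube's node-readiness wait hitting its 6-minute budget: node_ready.go polls the node object every couple of seconds, logs each transport error ("connection refused", "TLS handshake timeout"), and gives up when the context expires, which surfaces as "WaitNodeCondition: context deadline exceeded". A minimal self-contained Go sketch of that poll-until-deadline pattern follows; it is not minikube's actual implementation, and the URL, poll interval, and InsecureSkipVerify (in place of real cluster credentials) are illustrative assumptions.

	package main
	
	import (
		"context"
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)
	
	// nodeStatus models only the fields of a v1.Node this sketch reads.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	
	// waitNodeReady polls the apiserver's node object until the Ready
	// condition is True or ctx expires, retrying on transient errors.
	func waitNodeReady(ctx context.Context, url string) error {
		// A real client authenticates with the cluster CA and client certs;
		// skipping verification just keeps the sketch self-contained.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		ticker := time.NewTicker(2500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Reported above as "WaitNodeCondition: context deadline exceeded".
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					fmt.Println("error getting node (will retry):", err)
					continue
				}
				var n nodeStatus
				err = json.NewDecoder(resp.Body).Decode(&n)
				resp.Body.Close()
				if err != nil {
					continue
				}
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						return nil
					}
				}
			}
		}
	}
	
	func main() {
		// 6m0s matches the wait budget shown in the log above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		url := "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642"
		if err := waitNodeReady(ctx, url); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}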
preload_test.go:67: out/minikube-linux-amd64 start -p test-preload-764642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:636: *** TestPreload FAILED at 2025-10-19 17:08:32.240651646 +0000 UTC m=+2897.295917975
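The retry.go entries earlier in the log ("will retry after 5.610702604s", then progressively longer, jittered delays) show the addon manifests being re-applied with growing backoff until the overall addon budget is spent. A rough sketch of that pattern follows; applyWithRetry and the backoff constants are invented for illustration, not minikube's actual retry.go.

	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)
	
	// applyWithRetry re-runs `kubectl apply` with a growing, jittered delay
	// between attempts, mirroring the "will retry after ..." lines above.
	func applyWithRetry(manifest string, attempts int) error {
		backoff := 5 * time.Second
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%w: %s", err, out)
			// Randomize the delay so concurrent appliers do not retry in lockstep.
			d := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: %v\n", d, lastErr)
			time.Sleep(d)
			backoff = backoff * 3 / 2 // grow the base delay each round
		}
		return lastErr
	}
	
	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			// With the apiserver still refusing connections, every attempt
			// fails and the addon ends up reported as not enabled.
			fmt.Println("! Enabling 'default-storageclass' returned an error:", err)
		}
	}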
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-764642
helpers_test.go:243: (dbg) docker inspect test-preload-764642:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c",
	        "Created": "2025-10-19T17:01:29.00692684Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 170675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:02:25.442283039Z",
	            "FinishedAt": "2025-10-19T17:02:22.400022944Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c/hostname",
	        "HostsPath": "/var/lib/docker/containers/833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c/hosts",
	        "LogPath": "/var/lib/docker/containers/833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c/833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c-json.log",
	        "Name": "/test-preload-764642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-764642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-764642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "833f25ba1f3ff5e320e19ef8563dece08c130e748dd0203930bbbfe45852308c",
	                "LowerDir": "/var/lib/docker/overlay2/418cab87a4093d8613b55086e62f82a10dae41e20931019c3c1b5f066e7bb97c-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/418cab87a4093d8613b55086e62f82a10dae41e20931019c3c1b5f066e7bb97c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/418cab87a4093d8613b55086e62f82a10dae41e20931019c3c1b5f066e7bb97c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/418cab87a4093d8613b55086e62f82a10dae41e20931019c3c1b5f066e7bb97c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-764642",
	                "Source": "/var/lib/docker/volumes/test-preload-764642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-764642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-764642",
	                "name.minikube.sigs.k8s.io": "test-preload-764642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "705c8515f09ae06d16f9e944b8f0bbe6e7e0473d2576763d27ec7334c62a2995",
	            "SandboxKey": "/var/run/docker/netns/705c8515f09a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-764642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:08:fd:5c:be:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a0a900379bd370fb3ffd380a7fffce34c45a01f407033fb38fc5f258985646",
	                    "EndpointID": "74a5fc3d8f0e3324a861f1d7d8d05e7d0e35d61ef2b9123e681e36ac6d794640",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-764642",
	                        "833f25ba1f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
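For post-mortems like the inspect dump above, the fields that matter are usually the container's network address (192.168.76.2, the address the failed apiserver probes targeted) and the host ports that publish 8443. A small sketch of pulling those out of the `docker inspect` JSON, with the struct trimmed to just the fields this report uses:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// inspectEntry models only the NetworkSettings fields read below.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
			Networks map[string]struct {
				IPAddress string `json:"IPAddress"`
			} `json:"Networks"`
		} `json:"NetworkSettings"`
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "test-preload-764642").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for name, net := range e.NetworkSettings.Networks {
				fmt.Printf("network %s: %s\n", name, net.IPAddress) // 192.168.76.2 above
			}
			for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
				// The apiserver port as published on the host: 127.0.0.1:32961 above.
				fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
			}
		}
	}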
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-764642 -n test-preload-764642
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-764642 -n test-preload-764642: exit status 2 (296.528954ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
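The status probe above uses `--format={{.Host}}`, a Go text/template rendered over minikube's status struct, which is why it prints "Running" even though the nonzero exit code signals that other components are unhealthy. A toy illustration of that template rendering; only the Host field is confirmed by the output above, the other field names and values are assumptions for the sketch.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status stands in for the struct the --format template is rendered over.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}
	
	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
			panic(err)
		}
	}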
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764642 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-026920 cp multinode-026920-m03:/home/docker/cp-test.txt multinode-026920:/home/docker/cp-test_multinode-026920-m03_multinode-026920.txt         │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ ssh     │ multinode-026920 ssh -n multinode-026920-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ ssh     │ multinode-026920 ssh -n multinode-026920 sudo cat /home/docker/cp-test_multinode-026920-m03_multinode-026920.txt                                          │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ cp      │ multinode-026920 cp multinode-026920-m03:/home/docker/cp-test.txt multinode-026920-m02:/home/docker/cp-test_multinode-026920-m03_multinode-026920-m02.txt │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ ssh     │ multinode-026920 ssh -n multinode-026920-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ ssh     │ multinode-026920 ssh -n multinode-026920-m02 sudo cat /home/docker/cp-test_multinode-026920-m03_multinode-026920-m02.txt                                  │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ node    │ multinode-026920 node stop m03                                                                                                                            │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ node    │ multinode-026920 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ node    │ list -p multinode-026920                                                                                                                                  │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │                     │
	│ stop    │ -p multinode-026920                                                                                                                                       │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:58 UTC │
	│ start   │ -p multinode-026920 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:58 UTC │ 19 Oct 25 16:59 UTC │
	│ node    │ list -p multinode-026920                                                                                                                                  │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:59 UTC │                     │
	│ node    │ multinode-026920 node delete m03                                                                                                                          │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:59 UTC │ 19 Oct 25 16:59 UTC │
	│ stop    │ multinode-026920 stop                                                                                                                                     │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 16:59 UTC │ 19 Oct 25 17:00 UTC │
	│ start   │ -p multinode-026920 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 17:00 UTC │ 19 Oct 25 17:00 UTC │
	│ node    │ list -p multinode-026920                                                                                                                                  │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 17:00 UTC │                     │
	│ start   │ -p multinode-026920-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-026920-m02 │ jenkins │ v1.37.0 │ 19 Oct 25 17:00 UTC │                     │
	│ start   │ -p multinode-026920-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-026920-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 17:00 UTC │ 19 Oct 25 17:01 UTC │
	│ node    │ add -p multinode-026920                                                                                                                                   │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │                     │
	│ delete  │ -p multinode-026920-m03                                                                                                                                   │ multinode-026920-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ delete  │ -p multinode-026920                                                                                                                                       │ multinode-026920     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ start   │ -p test-preload-764642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-764642  │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:02 UTC │
	│ image   │ test-preload-764642 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-764642  │ jenkins │ v1.37.0 │ 19 Oct 25 17:02 UTC │ 19 Oct 25 17:02 UTC │
	│ stop    │ -p test-preload-764642                                                                                                                                    │ test-preload-764642  │ jenkins │ v1.37.0 │ 19 Oct 25 17:02 UTC │ 19 Oct 25 17:02 UTC │
	│ start   │ -p test-preload-764642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-764642  │ jenkins │ v1.37.0 │ 19 Oct 25 17:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:02:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:02:22.790163  170457 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:02:22.790281  170457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:02:22.790292  170457 out.go:374] Setting ErrFile to fd 2...
	I1019 17:02:22.790299  170457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:02:22.790550  170457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:02:22.791020  170457 out.go:368] Setting JSON to false
	I1019 17:02:22.791932  170457 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2689,"bootTime":1760890654,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:02:22.792022  170457 start.go:143] virtualization: kvm guest
	I1019 17:02:22.794262  170457 out.go:179] * [test-preload-764642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:02:22.795675  170457 notify.go:221] Checking for updates...
	I1019 17:02:22.795697  170457 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:02:22.796951  170457 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:02:22.798145  170457 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:22.799300  170457 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:02:22.800406  170457 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:02:22.801600  170457 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:02:22.803318  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:22.804993  170457 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:02:22.806169  170457 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:02:22.829623  170457 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:02:22.829779  170457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:02:22.888327  170457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 17:02:22.877604145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:02:22.888461  170457 docker.go:319] overlay module found
	I1019 17:02:22.890165  170457 out.go:179] * Using the docker driver based on existing profile
	I1019 17:02:22.891411  170457 start.go:309] selected driver: docker
	I1019 17:02:22.891429  170457 start.go:930] validating driver "docker" against &{Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:22.891536  170457 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:02:22.892266  170457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:02:22.950376  170457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 17:02:22.9409471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:02:22.950656  170457 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:02:22.950683  170457 cni.go:84] Creating CNI manager for ""
	I1019 17:02:22.950746  170457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:02:22.950796  170457 start.go:353] cluster config:
	{Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:22.953552  170457 out.go:179] * Starting "test-preload-764642" primary control-plane node in "test-preload-764642" cluster
	I1019 17:02:22.954702  170457 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:02:22.956503  170457 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:02:22.957462  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:22.957573  170457 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:02:22.977454  170457 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:02:22.977474  170457 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:02:22.983209  170457 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:02:22.983249  170457 cache.go:59] Caching tarball of preloaded images
	I1019 17:02:22.983371  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:22.985114  170457 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1019 17:02:22.986401  170457 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 17:02:23.027705  170457 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1019 17:02:23.027757  170457 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:02:25.395540  170457 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1019 17:02:25.395691  170457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/config.json ...
	I1019 17:02:25.395907  170457 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:02:25.395943  170457 start.go:360] acquireMachinesLock for test-preload-764642: {Name:mkcf3cadf84b7ebd663f42d50c88555e2bbe85f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:02:25.396010  170457 start.go:364] duration metric: took 45.815µs to acquireMachinesLock for "test-preload-764642"
	I1019 17:02:25.396030  170457 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:02:25.396036  170457 fix.go:54] fixHost starting: 
	I1019 17:02:25.396280  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:25.413876  170457 fix.go:112] recreateIfNeeded on test-preload-764642: state=Stopped err=<nil>
	W1019 17:02:25.413908  170457 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:02:25.416811  170457 out.go:252] * Restarting existing docker container for "test-preload-764642" ...
	I1019 17:02:25.416876  170457 cli_runner.go:164] Run: docker start test-preload-764642
	I1019 17:02:25.654524  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:25.674291  170457 kic.go:430] container "test-preload-764642" state is running.
	I1019 17:02:25.674639  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:25.692955  170457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/config.json ...
	I1019 17:02:25.693223  170457 machine.go:94] provisionDockerMachine start ...
	I1019 17:02:25.693286  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:25.711999  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:25.712345  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:25.712364  170457 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:02:25.712974  170457 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46656->127.0.0.1:32958: read: connection reset by peer
	I1019 17:02:28.845426  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-764642
	
	I1019 17:02:28.845458  170457 ubuntu.go:182] provisioning hostname "test-preload-764642"
	I1019 17:02:28.845525  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:28.864222  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:28.864447  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:28.864461  170457 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-764642 && echo "test-preload-764642" | sudo tee /etc/hostname
	I1019 17:02:29.007094  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-764642
	
	I1019 17:02:29.007179  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.025461  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:29.025747  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:29.025774  170457 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-764642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-764642/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-764642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:02:29.158801  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:02:29.158835  170457 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:02:29.158867  170457 ubuntu.go:190] setting up certificates
	I1019 17:02:29.158878  170457 provision.go:84] configureAuth start
	I1019 17:02:29.158930  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:29.177139  170457 provision.go:143] copyHostCerts
	I1019 17:02:29.177202  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:02:29.177221  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:02:29.177294  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:02:29.177432  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:02:29.177448  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:02:29.177491  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:02:29.177577  170457 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:02:29.177587  170457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:02:29.177624  170457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:02:29.177695  170457 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.test-preload-764642 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-764642]
	I1019 17:02:29.309892  170457 provision.go:177] copyRemoteCerts
	I1019 17:02:29.309948  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:02:29.309985  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.327548  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:29.423324  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:02:29.440958  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:02:29.457888  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:02:29.475255  170457 provision.go:87] duration metric: took 316.362601ms to configureAuth
	I1019 17:02:29.475288  170457 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:02:29.475452  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:29.475548  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.492917  170457 main.go:143] libmachine: Using SSH client type: native
	I1019 17:02:29.493145  170457 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1019 17:02:29.493165  170457 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:02:29.763966  170457 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:02:29.763990  170457 machine.go:97] duration metric: took 4.07075091s to provisionDockerMachine
	I1019 17:02:29.764005  170457 start.go:293] postStartSetup for "test-preload-764642" (driver="docker")
	I1019 17:02:29.764019  170457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:02:29.764112  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:02:29.764172  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.782442  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:29.878698  170457 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:02:29.882457  170457 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:02:29.882490  170457 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:02:29.882502  170457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:02:29.882556  170457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:02:29.882651  170457 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:02:29.882741  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:02:29.890680  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:02:29.908204  170457 start.go:296] duration metric: took 144.185723ms for postStartSetup
	I1019 17:02:29.908271  170457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:02:29.908321  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:29.926673  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.019363  170457 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:02:30.023764  170457 fix.go:56] duration metric: took 4.627721034s for fixHost
	I1019 17:02:30.023789  170457 start.go:83] releasing machines lock for "test-preload-764642", held for 4.627765183s
	I1019 17:02:30.023862  170457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-764642
	I1019 17:02:30.041267  170457 ssh_runner.go:195] Run: cat /version.json
	I1019 17:02:30.041302  170457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:02:30.041317  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:30.041352  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:30.060560  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.061526  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:30.213779  170457 ssh_runner.go:195] Run: systemctl --version
	I1019 17:02:30.220216  170457 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:02:30.254648  170457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:02:30.259338  170457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:02:30.259413  170457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:02:30.267704  170457 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:02:30.267724  170457 start.go:496] detecting cgroup driver to use...
	I1019 17:02:30.267761  170457 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:02:30.267809  170457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:02:30.282004  170457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:02:30.294133  170457 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:02:30.294186  170457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:02:30.308400  170457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:02:30.320466  170457 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:02:30.401991  170457 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:02:30.482748  170457 docker.go:234] disabling docker service ...
	I1019 17:02:30.482806  170457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:02:30.497469  170457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:02:30.510639  170457 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:02:30.592582  170457 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:02:30.673284  170457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:02:30.685585  170457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:02:30.699717  170457 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1019 17:02:30.699770  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.708516  170457 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:02:30.708582  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.717262  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.725858  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.734495  170457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:02:30.742481  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.751450  170457 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.760126  170457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:02:30.768671  170457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:02:30.775855  170457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:02:30.783289  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:30.859327  170457 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:02:30.969001  170457 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:02:30.969086  170457 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:02:30.973253  170457 start.go:564] Will wait 60s for crictl version
	I1019 17:02:30.973308  170457 ssh_runner.go:195] Run: which crictl
	I1019 17:02:30.977140  170457 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:02:31.000870  170457 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:02:31.000962  170457 ssh_runner.go:195] Run: crio --version
	I1019 17:02:31.028021  170457 ssh_runner.go:195] Run: crio --version
	I1019 17:02:31.057141  170457 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1019 17:02:31.058535  170457 cli_runner.go:164] Run: docker network inspect test-preload-764642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:02:31.075805  170457 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:02:31.079980  170457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:02:31.090200  170457 kubeadm.go:884] updating cluster {Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:02:31.090290  170457 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:02:31.090327  170457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:02:31.120443  170457 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:02:31.120464  170457 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:02:31.120507  170457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:02:31.145354  170457 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:02:31.145376  170457 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:02:31.145382  170457 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1019 17:02:31.145476  170457 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-764642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:02:31.145588  170457 ssh_runner.go:195] Run: crio config
	I1019 17:02:31.188674  170457 cni.go:84] Creating CNI manager for ""
	I1019 17:02:31.188694  170457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:02:31.188709  170457 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:02:31.188730  170457 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-764642 NodeName:test-preload-764642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:02:31.188840  170457 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-764642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:02:31.188900  170457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1019 17:02:31.197143  170457 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:02:31.197212  170457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:02:31.205100  170457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1019 17:02:31.217673  170457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:02:31.230231  170457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 17:02:31.242722  170457 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:02:31.246385  170457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:02:31.256411  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:31.333607  170457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:02:31.358614  170457 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642 for IP: 192.168.76.2
	I1019 17:02:31.358634  170457 certs.go:195] generating shared ca certs ...
	I1019 17:02:31.358648  170457 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:31.358792  170457 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:02:31.358848  170457 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:02:31.358861  170457 certs.go:257] generating profile certs ...
	I1019 17:02:31.358968  170457 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key
	I1019 17:02:31.359020  170457 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.key.d7385673
	I1019 17:02:31.359095  170457 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.key
	I1019 17:02:31.359259  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:02:31.359296  170457 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:02:31.359302  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:02:31.359331  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:02:31.359364  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:02:31.359394  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:02:31.359444  170457 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:02:31.360096  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:02:31.379228  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:02:31.399031  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:02:31.419871  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:02:31.444332  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:02:31.462166  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1019 17:02:31.479712  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:02:31.496728  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:02:31.514444  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:02:31.532377  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:02:31.550790  170457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:02:31.567856  170457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:02:31.580677  170457 ssh_runner.go:195] Run: openssl version
	I1019 17:02:31.586938  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:02:31.595977  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.599876  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.599933  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:02:31.634535  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:02:31.642831  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:02:31.651139  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.654754  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.654811  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:02:31.688229  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:02:31.696326  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:02:31.705413  170457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.709549  170457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.709631  170457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:02:31.744417  170457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:02:31.753062  170457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:02:31.756954  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:02:31.791898  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:02:31.825854  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:02:31.862015  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:02:31.906665  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:02:31.957109  170457 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 17:02:31.994259  170457 kubeadm.go:401] StartCluster: {Name:test-preload-764642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:02:31.994380  170457 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:02:31.994454  170457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:02:32.024156  170457 cri.go:89] found id: "efcc7f9f584733beb448fd7c42f2d2dd702c7e1f67af89218a20677b80ebf7a1"
	I1019 17:02:32.024180  170457 cri.go:89] found id: "6eb1c676037c8b10318b2b1fa1ad6fb08228ef713c8948af966f5dc421e5e59b"
	I1019 17:02:32.024186  170457 cri.go:89] found id: ""
	I1019 17:02:32.024240  170457 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:02:32.036856  170457 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:02:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:02:32.036947  170457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:02:32.044951  170457 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:02:32.044973  170457 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:02:32.045025  170457 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:02:32.052554  170457 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:02:32.052958  170457 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-764642" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:32.053058  170457 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-764642" cluster setting kubeconfig missing "test-preload-764642" context setting]
	I1019 17:02:32.053421  170457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.053893  170457 kapi.go:59] client config for test-preload-764642: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:02:32.054300  170457 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 17:02:32.054315  170457 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 17:02:32.054320  170457 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 17:02:32.054324  170457 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 17:02:32.054327  170457 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 17:02:32.054625  170457 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:02:32.062215  170457 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:02:32.062242  170457 kubeadm.go:602] duration metric: took 17.263069ms to restartPrimaryControlPlane
	I1019 17:02:32.062250  170457 kubeadm.go:403] duration metric: took 68.002901ms to StartCluster
	I1019 17:02:32.062263  170457 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.062334  170457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:02:32.062893  170457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:02:32.063165  170457 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:02:32.063234  170457 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
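The toEnable map above is a plain name-to-bool table; only default-storageclass and storage-provisioner are true in this run, which is why only those two addons are set up in the lines that follow. A trivial sketch of reducing the map to the enabled names (enabledAddons is a hypothetical helper):

    package addonsketch

    import "sort"

    // enabledAddons reduces the toEnable map to the names set to true — for
    // this run, [default-storageclass storage-provisioner].
    func enabledAddons(toEnable map[string]bool) []string {
    	names := make([]string, 0, len(toEnable))
    	for name, on := range toEnable {
    		if on {
    			names = append(names, name)
    		}
    	}
    	sort.Strings(names)
    	return names
    }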
	I1019 17:02:32.063347  170457 addons.go:70] Setting storage-provisioner=true in profile "test-preload-764642"
	I1019 17:02:32.063363  170457 addons.go:239] Setting addon storage-provisioner=true in "test-preload-764642"
	W1019 17:02:32.063372  170457 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:02:32.063388  170457 addons.go:70] Setting default-storageclass=true in profile "test-preload-764642"
	I1019 17:02:32.063404  170457 host.go:66] Checking if "test-preload-764642" exists ...
	I1019 17:02:32.063418  170457 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-764642"
	I1019 17:02:32.063418  170457 config.go:182] Loaded profile config "test-preload-764642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:02:32.063741  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.063948  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.066010  170457 out.go:179] * Verifying Kubernetes components...
	I1019 17:02:32.067215  170457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:02:32.086098  170457 kapi.go:59] client config for test-preload-764642: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/profiles/test-preload-764642/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:02:32.086210  170457 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:02:32.086500  170457 addons.go:239] Setting addon default-storageclass=true in "test-preload-764642"
	W1019 17:02:32.086523  170457 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:02:32.086553  170457 host.go:66] Checking if "test-preload-764642" exists ...
	I1019 17:02:32.087092  170457 cli_runner.go:164] Run: docker container inspect test-preload-764642 --format={{.State.Status}}
	I1019 17:02:32.087826  170457 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:02:32.087848  170457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:02:32.087905  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:32.115136  170457 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:02:32.115160  170457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:02:32.115219  170457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-764642
	I1019 17:02:32.116880  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
	I1019 17:02:32.139566  170457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/test-preload-764642/id_rsa Username:docker}
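The cli_runner/sshutil pairs above resolve the host port Docker mapped to the container's 22/tcp and then open SSH clients against 127.0.0.1 on that port. The lookup uses the exact Go template shown in the log; a local sketch (hostSSHPort is an illustrative name):

    package sshportsketch

    import (
    	"os/exec"
    	"strings"
    )

    // hostSSHPort resolves the host port Docker mapped to the container's
    // 22/tcp, using the same Go template as the cli_runner lines above.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

For "test-preload-764642" this returns "32958", which sshutil then dials at 127.0.0.1 with the profile's id_rsa key, as the log shows.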
	I1019 17:02:32.184456  170457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:02:32.197803  170457 node_ready.go:35] waiting up to 6m0s for node "test-preload-764642" to be "Ready" ...
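This line starts the 6m node-readiness poll whose retries dominate the rest of the log: every Get against /api/v1/nodes/test-preload-764642 that fails with a TLS handshake timeout or connection refusal is logged as a warning and retried. A hedged sketch of such a poller using client-go and apimachinery's wait helpers (waitNodeReady is an illustrative name, not minikube's node_ready.go):

    package nodewaitsketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's Ready condition is True, treating
    // request errors (connection refused, TLS handshake timeout) as transient
    // — the shape of the node_ready.go warnings that follow in the log.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				fmt.Println("will retry:", err) // transient: keep polling
    				return false, nil
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }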
	I1019 17:02:32.223468  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:02:32.248036  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:02:42.199238  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:02:52.200308  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:02:52.286927  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.063421483s)
	W1019 17:02:52.286962  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.286985  170457 retry.go:31] will retry after 341.789128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
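From here on, each failed `kubectl apply` is handed to retry.go, which reschedules it after a short randomized delay (341ms and 301ms above, growing to several seconds and eventually 13-14s later in the log). A minimal sketch of that retry-with-jittered-backoff shape (an assumed generic helper, not minikube's retry.go):

    package retrysketch

    import (
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(0)
    		if half := int64(delay) / 2; half > 0 {
    			jitter = time.Duration(rand.Int63n(half))
    		}
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return err
    }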
	I1019 17:02:52.320908  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (20.072831057s)
	W1019 17:02:52.320949  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.320965  170457 retry.go:31] will retry after 301.292723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:02:52.622454  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:02:52.629101  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:03.745330  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.76.1:56780->192.168.76.2:8443: read: connection reset by peer
	W1019 17:03:13.746402  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:03:13.750404  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.127910865s)
	I1019 17:03:13.750439  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (21.121311024s)
	W1019 17:03:13.750458  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:13.750477  170457 retry.go:31] will retry after 286.161608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:13.750441  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:13.750498  170457 retry.go:31] will retry after 345.623154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:14.037002  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:14.096716  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:03:15.131282  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.09423771s)
	W1019 17:03:15.131321  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131335  170457 retry.go:31] will retry after 754.569562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131335  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.034573163s)
	W1019 17:03:15.131363  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.131385  170457 retry.go:31] will retry after 766.782295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.886460  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:15.899149  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:15.943800  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.943835  170457 retry.go:31] will retry after 992.301835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:15.956419  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:15.956448  170457 retry.go:31] will retry after 847.560285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:16.199323  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:16.804695  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:16.860171  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.860209  170457 retry.go:31] will retry after 1.74293922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.936331  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:16.991653  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:16.991684  170457 retry.go:31] will retry after 885.009476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:17.876907  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:17.933974  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:17.934012  170457 retry.go:31] will retry after 2.191270849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:18.603412  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:18.659827  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:18.659862  170457 retry.go:31] will retry after 1.527929793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:18.698332  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:20.125899  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:20.181044  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.181093  170457 retry.go:31] will retry after 2.351921748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.188272  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:20.243334  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:20.243389  170457 retry.go:31] will retry after 2.231694799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:20.699385  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:22.476057  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:22.530849  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.530883  170457 retry.go:31] will retry after 4.671029691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.534040  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:22.588978  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:22.589009  170457 retry.go:31] will retry after 2.706627471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:23.198936  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:25.199126  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:25.296301  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:03:25.353652  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:25.353684  170457 retry.go:31] will retry after 7.117580556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:27.202922  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:27.259464  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:27.259498  170457 retry.go:31] will retry after 5.610702604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:27.698399  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:03:32.471535  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:03:32.870729  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:03:39.700429  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:03:49.702523  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:03:50.717308  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.245723536s)
	W1019 17:03:50.717353  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39626->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:50.717356  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (17.846592202s)
	I1019 17:03:50.717374  170457 retry.go:31] will retry after 13.299653801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39626->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:50.717377  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39638->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:03:50.717393  170457 retry.go:31] will retry after 13.66811691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:39638->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:03:52.198803  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:54.698605  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:56.699130  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:03:59.198763  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:01.698604  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:03.699047  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:04:04.017428  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:04:04.074400  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.074432  170457 retry.go:31] will retry after 14.584149419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.386451  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1019 17:04:04.443208  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:04.443246  170457 retry.go:31] will retry after 13.014964721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:04:06.198511  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:08.698514  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:04:17.459296  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:04:18.659279  170457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1019 17:04:20.700121  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:04:30.701120  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	I1019 17:04:31.957202  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (14.497866675s)
	W1019 17:04:31.957235  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45624->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 17:04:31.957288  170457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.29796714s)
	W1019 17:04:31.957321  170457 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45630->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 17:04:31.957349  170457 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45624->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1019 17:04:31.957390  170457 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45630->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 17:04:31.960038  170457 out.go:179] * Enabled addons: 
	I1019 17:04:31.961638  170457 addons.go:515] duration metric: took 1m59.898405376s for enable addons: enabled=[]
	W1019 17:04:33.198456  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:35.198885  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:37.698620  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:39.699468  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:42.198999  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:44.698806  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:46.699123  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:49.199135  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:51.698743  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:54.198578  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:56.198757  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:04:58.698512  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:00.699112  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:03.198810  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:05.698669  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:07.699357  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:10.199112  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:12.698603  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:15.198585  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:17.199427  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:27.703495  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:05:37.708540  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:05:40.198602  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:42.199050  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:44.698759  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:47.198542  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:49.199372  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:51.698959  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:54.198757  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:56.698561  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:05:58.699219  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:01.198809  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:03.698748  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:06.198463  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:08.198819  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:10.698519  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:12.699218  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:15.198651  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:17.199196  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:19.698929  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:22.198864  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:24.698584  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:26.699032  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:29.198693  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:31.199140  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:33.698327  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:35.699407  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:38.198625  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:40.698457  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:42.698784  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:44.699452  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:47.198517  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:49.198625  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:51.698489  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:53.699131  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:55.699294  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:06:57.699442  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:00.199478  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:02.698593  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:14.700414  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:07:24.702559  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": net/http: TLS handshake timeout
	W1019 17:07:27.199431  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:29.698568  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:31.699150  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:34.198803  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:36.698567  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:38.699168  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:41.198905  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:43.698700  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:45.699444  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:48.199019  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:50.698652  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:52.699248  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:55.198843  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:57.698602  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:07:59.699332  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:02.198827  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:04.698364  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:06.698641  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:08.699103  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:11.198580  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:13.198981  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:15.698607  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:17.699205  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:20.198713  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:22.698658  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:25.198631  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:27.698678  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	W1019 17:08:30.198507  170457 node_ready.go:55] error getting node "test-preload-764642" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-764642": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 17:08:32.198222  170457 node_ready.go:38] duration metric: took 6m0.000376108s for node "test-preload-764642" to be "Ready" ...
	I1019 17:08:32.200485  170457 out.go:203] 
	W1019 17:08:32.202127  170457 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1019 17:08:32.202150  170457 out.go:285] * 
	W1019 17:08:32.203967  170457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:08:32.205697  170457 out.go:203] 
	
	
	==> CRI-O <==
	Oct 19 17:05:17 test-preload-764642 crio[546]: time="2025-10-19T17:05:17.453993976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:05:17 test-preload-764642 crio[546]: time="2025-10-19T17:05:17.48195175Z" level=info msg="Created container 84bd9660625ddc1b655d859d5d6c988fd14a39591803afc29775ba0150df0262: kube-system/kube-apiserver-test-preload-764642/kube-apiserver" id=e2575179-179c-43f7-9562-2a790912d380 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:05:17 test-preload-764642 crio[546]: time="2025-10-19T17:05:17.482603423Z" level=info msg="Starting container: 84bd9660625ddc1b655d859d5d6c988fd14a39591803afc29775ba0150df0262" id=ce407fd2-e244-4ea7-bef6-d9301a6dbce8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:05:17 test-preload-764642 crio[546]: time="2025-10-19T17:05:17.485255229Z" level=info msg="Started container" PID=1307 containerID=84bd9660625ddc1b655d859d5d6c988fd14a39591803afc29775ba0150df0262 description=kube-system/kube-apiserver-test-preload-764642/kube-apiserver id=ce407fd2-e244-4ea7-bef6-d9301a6dbce8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2c3c01ffa6a16a62b7747085fde7b33abe691536305e1e368ed533759d5043b
	Oct 19 17:05:38 test-preload-764642 crio[546]: time="2025-10-19T17:05:38.815059061Z" level=info msg="Removing container: 9f6e4d45e9338b63c52b469ddcf5c1c9c1e9fab2ac2a921625b7d367f424a887" id=16dda022-d010-467b-ad37-c6c87df767b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:05:38 test-preload-764642 crio[546]: time="2025-10-19T17:05:38.824185163Z" level=info msg="Removed container 9f6e4d45e9338b63c52b469ddcf5c1c9c1e9fab2ac2a921625b7d367f424a887: kube-system/kube-apiserver-test-preload-764642/kube-apiserver" id=16dda022-d010-467b-ad37-c6c87df767b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:05:45 test-preload-764642 crio[546]: time="2025-10-19T17:05:45.583711347Z" level=info msg="createCtr: deleting container c887c86ba4e27bc6a94a4b7fa177ca540947d63b430b62511ad92bc675ea91e2 from storage" id=8d0fae82-5526-4796-9a98-cc098f0f27b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:05:45 test-preload-764642 crio[546]: time="2025-10-19T17:05:45.583742227Z" level=info msg="createCtr: deleting container aceb600fd49d3bad2e994095bef0eac7421bec87b828f2468649ce2d986989e7 from storage" id=ffd759bf-a2c2-4cb7-86da-1ec854638814 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:05:45 test-preload-764642 crio[546]: time="2025-10-19T17:05:45.5842512Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f07a655626d0b45e81817619f1ceddefa872f420d18a12023f8f9ad400c4fb83/merged\": directory not empty" id=8d0fae82-5526-4796-9a98-cc098f0f27b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:05:45 test-preload-764642 crio[546]: time="2025-10-19T17:05:45.584466082Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/6240400b61ca8d7e5ccd77ea3a1524f4e8f868035c5dfd51d987a73dd316e8db/merged\": directory not empty" id=ffd759bf-a2c2-4cb7-86da-1ec854638814 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.446140102Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.32.0" id=5da15977-662b-411e-b944-87b412a3b4ab name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.447142283Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.32.0" id=30e1b227-baec-47eb-8c44-e4a2ace01027 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.448380163Z" level=info msg="Creating container: kube-system/kube-apiserver-test-preload-764642/kube-apiserver" id=a7ea4297-8ca7-4c09-a279-6352ba5088cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.448668834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.452869875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.45345416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.483105226Z" level=info msg="Created container dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6: kube-system/kube-apiserver-test-preload-764642/kube-apiserver" id=a7ea4297-8ca7-4c09-a279-6352ba5088cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.483784479Z" level=info msg="Starting container: dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6" id=95e79157-1e75-4698-8e4a-a31abf9092a8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:07:04 test-preload-764642 crio[546]: time="2025-10-19T17:07:04.485756163Z" level=info msg="Started container" PID=1333 containerID=dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6 description=kube-system/kube-apiserver-test-preload-764642/kube-apiserver id=95e79157-1e75-4698-8e4a-a31abf9092a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2c3c01ffa6a16a62b7747085fde7b33abe691536305e1e368ed533759d5043b
	Oct 19 17:07:22 test-preload-764642 crio[546]: time="2025-10-19T17:07:22.89492257Z" level=info msg="createCtr: deleting container aceb600fd49d3bad2e994095bef0eac7421bec87b828f2468649ce2d986989e7 from storage" id=ffd759bf-a2c2-4cb7-86da-1ec854638814 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:22 test-preload-764642 crio[546]: time="2025-10-19T17:07:22.894993072Z" level=info msg="createCtr: deleting container c887c86ba4e27bc6a94a4b7fa177ca540947d63b430b62511ad92bc675ea91e2 from storage" id=8d0fae82-5526-4796-9a98-cc098f0f27b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:22 test-preload-764642 crio[546]: time="2025-10-19T17:07:22.89535681Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/6240400b61ca8d7e5ccd77ea3a1524f4e8f868035c5dfd51d987a73dd316e8db/merged\": directory not empty" id=ffd759bf-a2c2-4cb7-86da-1ec854638814 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:22 test-preload-764642 crio[546]: time="2025-10-19T17:07:22.895561013Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f07a655626d0b45e81817619f1ceddefa872f420d18a12023f8f9ad400c4fb83/merged\": directory not empty" id=8d0fae82-5526-4796-9a98-cc098f0f27b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:07:25 test-preload-764642 crio[546]: time="2025-10-19T17:07:25.008259286Z" level=info msg="Removing container: 84bd9660625ddc1b655d859d5d6c988fd14a39591803afc29775ba0150df0262" id=a60af79d-4412-4414-8924-45b8337cbfaa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:07:25 test-preload-764642 crio[546]: time="2025-10-19T17:07:25.017694777Z" level=info msg="Removed container 84bd9660625ddc1b655d859d5d6c988fd14a39591803afc29775ba0150df0262: kube-system/kube-apiserver-test-preload-764642/kube-apiserver" id=a60af79d-4412-4414-8924-45b8337cbfaa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                ATTEMPT             POD ID              POD                                  NAMESPACE
	dffe54a575ebd       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   About a minute ago   Exited              kube-apiserver      6                   e2c3c01ffa6a1       kube-apiserver-test-preload-764642   kube-system
	6eb1c676037c8       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   6 minutes ago        Running             kube-scheduler      1                   86263e163bc2d       kube-scheduler-test-preload-764642   kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> kernel <==
	 17:08:33 up 50 min,  0 user,  load average: 0.01, 0.34, 0.61
	Linux test-preload-764642 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6] <==
	I1019 17:07:04.533144       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1019 17:07:04.835449       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:04.835449       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1019 17:07:04.836351       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1019 17:07:04.844376       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1019 17:07:04.851028       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1019 17:07:04.851054       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1019 17:07:04.851306       1 instance.go:233] Using reconciler: lease
	W1019 17:07:04.852312       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:05.836171       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:05.836236       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:05.853348       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:07.235486       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:07.252544       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:07.667875       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:09.553048       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:10.259130       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:10.631455       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:13.702454       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:14.423554       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:14.759279       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:20.552806       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:20.849706       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:07:21.368559       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1019 17:07:24.852634       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-scheduler [6eb1c676037c8b10318b2b1fa1ad6fb08228ef713c8948af966f5dc421e5e59b] <==
	E1019 17:08:07.372681       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:07.789743       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:07.789818       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:09.755545       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:09.755616       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:10.007656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:10.007724       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:12.437341       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:12.437421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:14.191822       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:14.191891       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:15.356279       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:15.356350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:15.562015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:15.562114       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:17.730570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:17.730643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:22.787572       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:22.787638       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:26.375554       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:26.375625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:30.413336       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:30.413403       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	W1019 17:08:33.076255       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1019 17:08:33.076312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 19 17:08:07 test-preload-764642 kubelet[706]: E1019 17:08:07.987213     706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-764642?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 19 17:08:09 test-preload-764642 kubelet[706]: E1019 17:08:09.445258     706 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-764642\" not found" node="test-preload-764642"
	Oct 19 17:08:09 test-preload-764642 kubelet[706]: I1019 17:08:09.445367     706 scope.go:117] "RemoveContainer" containerID="dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6"
	Oct 19 17:08:09 test-preload-764642 kubelet[706]: E1019 17:08:09.445558     706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-test-preload-764642_kube-system(0a83f6b625d6cb114c6c9fcf31205d24)\"" pod="kube-system/kube-apiserver-test-preload-764642" podUID="0a83f6b625d6cb114c6c9fcf31205d24"
	Oct 19 17:08:09 test-preload-764642 kubelet[706]: W1019 17:08:09.832256     706 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 19 17:08:09 test-preload-764642 kubelet[706]: E1019 17:08:09.832339     706 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 19 17:08:11 test-preload-764642 kubelet[706]: E1019 17:08:11.464880     706 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-764642\" not found"
	Oct 19 17:08:12 test-preload-764642 kubelet[706]: W1019 17:08:12.687376     706 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-764642&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 19 17:08:12 test-preload-764642 kubelet[706]: E1019 17:08:12.687458     706 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-764642&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 19 17:08:14 test-preload-764642 kubelet[706]: I1019 17:08:14.918717     706 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-764642"
	Oct 19 17:08:14 test-preload-764642 kubelet[706]: E1019 17:08:14.919283     706 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-764642"
	Oct 19 17:08:14 test-preload-764642 kubelet[706]: E1019 17:08:14.988617     706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-764642?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 19 17:08:16 test-preload-764642 kubelet[706]: E1019 17:08:16.010727     706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-764642.186ff32c6beacb8e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-764642,UID:test-preload-764642,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node test-preload-764642 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:test-preload-764642,},FirstTimestamp:2025-10-19 17:02:31.439813518 +0000 UTC m=+0.079534657,LastTimestamp:2025-10-19 17:02:31.439813518 +0000 UTC m=+0.079534657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-764642,}"
	Oct 19 17:08:21 test-preload-764642 kubelet[706]: E1019 17:08:21.465978     706 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-764642\" not found"
	Oct 19 17:08:21 test-preload-764642 kubelet[706]: I1019 17:08:21.920786     706 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-764642"
	Oct 19 17:08:21 test-preload-764642 kubelet[706]: E1019 17:08:21.921219     706 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-764642"
	Oct 19 17:08:21 test-preload-764642 kubelet[706]: E1019 17:08:21.990180     706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-764642?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 19 17:08:23 test-preload-764642 kubelet[706]: E1019 17:08:23.445111     706 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-764642\" not found" node="test-preload-764642"
	Oct 19 17:08:23 test-preload-764642 kubelet[706]: I1019 17:08:23.445217     706 scope.go:117] "RemoveContainer" containerID="dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6"
	Oct 19 17:08:23 test-preload-764642 kubelet[706]: E1019 17:08:23.445359     706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-test-preload-764642_kube-system(0a83f6b625d6cb114c6c9fcf31205d24)\"" pod="kube-system/kube-apiserver-test-preload-764642" podUID="0a83f6b625d6cb114c6c9fcf31205d24"
	Oct 19 17:08:26 test-preload-764642 kubelet[706]: E1019 17:08:26.012034     706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-764642.186ff32c6beacb8e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-764642,UID:test-preload-764642,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node test-preload-764642 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:test-preload-764642,},FirstTimestamp:2025-10-19 17:02:31.439813518 +0000 UTC m=+0.079534657,LastTimestamp:2025-10-19 17:02:31.439813518 +0000 UTC m=+0.079534657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-764642,}"
	Oct 19 17:08:28 test-preload-764642 kubelet[706]: I1019 17:08:28.922810     706 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-764642"
	Oct 19 17:08:28 test-preload-764642 kubelet[706]: E1019 17:08:28.923219     706 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-764642"
	Oct 19 17:08:28 test-preload-764642 kubelet[706]: E1019 17:08:28.990926     706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-764642?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 19 17:08:31 test-preload-764642 kubelet[706]: E1019 17:08:31.466369     706 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-764642\" not found"
	

                                                
                                                
-- /stdout --
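The log excerpt above shows the failure chain: kube-apiserver cannot reach etcd on 127.0.0.1:2379 (connection refused), exits with "Error creating leases: error creating storage factory: context deadline exceeded", kubelet then holds it in CrashLoopBackOff, and the node therefore never reports Ready before the 6m0s wait expires. Note that no etcd container appears in the container status listing at all. A minimal sketch for gathering the same evidence by hand, assuming the profile still exists (it is deleted during cleanup below); the container ID comes from the listing above:

	# capture the full log bundle the advice box above asks for
	out/minikube-linux-amd64 logs --file=logs.txt -p test-preload-764642
	# inspect the crash-looping apiserver, then check whether etcd ever started
	out/minikube-linux-amd64 ssh -p test-preload-764642 -- sudo crictl ps -a --name kube-apiserver
	out/minikube-linux-amd64 ssh -p test-preload-764642 -- sudo crictl logs dffe54a575ebdc2f55b899512f6dc598bb9568bd437f7a06bb603bf881060fa6
	out/minikube-linux-amd64 ssh -p test-preload-764642 -- sudo crictl ps -a --name etcd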
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-764642 -n test-preload-764642
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-764642 -n test-preload-764642: exit status 2 (300.380044ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-764642" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-764642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-764642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-764642: (2.372501358s)
--- FAIL: TestPreload (427.89s)
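For reference, the Ready poll that node_ready.go repeats above (GET /api/v1/nodes/test-preload-764642) is roughly equivalent to the following kubectl query, which only starts succeeding once the apiserver stays up; a reproduction sketch, not the harness's own code:

	kubectl get node test-preload-764642 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'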

                                                
                                    
x
+
TestPause/serial/Pause (6.5s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-111127 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-111127 --alsologtostderr -v=5: exit status 80 (2.559689944s)

                                                
                                                
-- stdout --
	* Pausing node pause-111127 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:11:27.119818  196352 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:11:27.119955  196352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:27.119964  196352 out.go:374] Setting ErrFile to fd 2...
	I1019 17:11:27.119971  196352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:27.120305  196352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:11:27.120601  196352 out.go:368] Setting JSON to false
	I1019 17:11:27.120623  196352 mustload.go:66] Loading cluster: pause-111127
	I1019 17:11:27.121183  196352 config.go:182] Loaded profile config "pause-111127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:11:27.121807  196352 cli_runner.go:164] Run: docker container inspect pause-111127 --format={{.State.Status}}
	I1019 17:11:27.145997  196352 host.go:66] Checking if "pause-111127" exists ...
	I1019 17:11:27.147193  196352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:27.226570  196352 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 17:11:27.213527506 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:27.227415  196352 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-111127 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:11:27.234835  196352 out.go:179] * Pausing node pause-111127 ... 
	I1019 17:11:27.237121  196352 host.go:66] Checking if "pause-111127" exists ...
	I1019 17:11:27.237471  196352 ssh_runner.go:195] Run: systemctl --version
	I1019 17:11:27.237552  196352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-111127
	I1019 17:11:27.264363  196352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/pause-111127/id_rsa Username:docker}
	I1019 17:11:27.366095  196352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:11:27.382387  196352 pause.go:52] kubelet running: true
	I1019 17:11:27.382452  196352 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:11:27.540479  196352 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:11:27.540585  196352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:11:27.628700  196352 cri.go:89] found id: "1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4"
	I1019 17:11:27.628723  196352 cri.go:89] found id: "c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1"
	I1019 17:11:27.628728  196352 cri.go:89] found id: "6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809"
	I1019 17:11:27.628732  196352 cri.go:89] found id: "44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11"
	I1019 17:11:27.628743  196352 cri.go:89] found id: "92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f"
	I1019 17:11:27.628747  196352 cri.go:89] found id: "c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905"
	I1019 17:11:27.628751  196352 cri.go:89] found id: "265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c"
	I1019 17:11:27.628755  196352 cri.go:89] found id: ""
	I1019 17:11:27.628798  196352 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:11:27.642911  196352 retry.go:31] will retry after 300.882574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:11:27Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:11:27.944301  196352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:11:27.959859  196352 pause.go:52] kubelet running: false
	I1019 17:11:27.959917  196352 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:11:28.092029  196352 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:11:28.092154  196352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:11:28.177210  196352 cri.go:89] found id: "1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4"
	I1019 17:11:28.177229  196352 cri.go:89] found id: "c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1"
	I1019 17:11:28.177233  196352 cri.go:89] found id: "6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809"
	I1019 17:11:28.177240  196352 cri.go:89] found id: "44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11"
	I1019 17:11:28.177243  196352 cri.go:89] found id: "92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f"
	I1019 17:11:28.177246  196352 cri.go:89] found id: "c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905"
	I1019 17:11:28.177249  196352 cri.go:89] found id: "265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c"
	I1019 17:11:28.177253  196352 cri.go:89] found id: ""
	I1019 17:11:28.177299  196352 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:11:28.190822  196352 retry.go:31] will retry after 283.641127ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:11:28Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:11:28.475262  196352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:11:28.490512  196352 pause.go:52] kubelet running: false
	I1019 17:11:28.490576  196352 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:11:28.630839  196352 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:11:28.630962  196352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:11:28.713979  196352 cri.go:89] found id: "1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4"
	I1019 17:11:28.714002  196352 cri.go:89] found id: "c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1"
	I1019 17:11:28.714017  196352 cri.go:89] found id: "6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809"
	I1019 17:11:28.714022  196352 cri.go:89] found id: "44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11"
	I1019 17:11:28.714026  196352 cri.go:89] found id: "92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f"
	I1019 17:11:28.714030  196352 cri.go:89] found id: "c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905"
	I1019 17:11:28.714034  196352 cri.go:89] found id: "265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c"
	I1019 17:11:28.714042  196352 cri.go:89] found id: ""
	I1019 17:11:28.714112  196352 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:11:28.728422  196352 retry.go:31] will retry after 566.866267ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:11:28Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:11:29.296410  196352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:11:29.320540  196352 pause.go:52] kubelet running: false
	I1019 17:11:29.320718  196352 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:11:29.488996  196352 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:11:29.489126  196352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:11:29.579843  196352 cri.go:89] found id: "1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4"
	I1019 17:11:29.579870  196352 cri.go:89] found id: "c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1"
	I1019 17:11:29.579876  196352 cri.go:89] found id: "6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809"
	I1019 17:11:29.579882  196352 cri.go:89] found id: "44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11"
	I1019 17:11:29.579888  196352 cri.go:89] found id: "92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f"
	I1019 17:11:29.579893  196352 cri.go:89] found id: "c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905"
	I1019 17:11:29.579897  196352 cri.go:89] found id: "265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c"
	I1019 17:11:29.579901  196352 cri.go:89] found id: ""
	I1019 17:11:29.579944  196352 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:11:29.599659  196352 out.go:203] 
	W1019 17:11:29.601445  196352 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:11:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:11:29.601467  196352 out.go:285] * 
	W1019 17:11:29.607464  196352 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:11:29.610577  196352 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-111127 --alsologtostderr -v=5" : exit status 80
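
Root cause, for readers triaging this failure: the pause path enumerates running containers with `sudo runc list -f json`, but the CRI-O configuration dumped later in these logs sets `default_runtime = "crun"` with `runtime_root = "/run/crun"`, so `/run/runc` is never created and every retry fails with the same `open /run/runc: no such file or directory`. A minimal diagnostic sketch, assuming the pause-111127 node from this run is still up (the probe is illustrative, not part of minikube):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Probe which OCI runtime state directories exist inside the node.
	// `minikube ssh -- <cmd>` executes <cmd> in the node over SSH, much
	// as the ssh_runner.go lines above do with minikube's own client.
	func main() {
		for _, root := range []string{"/run/runc", "/run/crun"} {
			out, err := exec.Command("minikube", "-p", "pause-111127", "ssh", "--",
				"sudo", "ls", root).CombinedOutput()
			if err != nil {
				// expected for /run/runc on this node
				fmt.Printf("%s: absent (%v)\n", root, err)
				continue
			}
			fmt.Printf("%s: present, per-container state dirs:\n%s", root, out)
		}
	}

On this node the probe should report `/run/runc: absent` and list state directories under `/run/crun` for the seven container IDs found above, consistent with the error in the retries.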
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-111127
helpers_test.go:243: (dbg) docker inspect pause-111127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2",
	        "Created": "2025-10-19T17:10:42.581887371Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:10:42.635672541Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/hosts",
	        "LogPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2-json.log",
	        "Name": "/pause-111127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-111127:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-111127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2",
	                "LowerDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-111127",
	                "Source": "/var/lib/docker/volumes/pause-111127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-111127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-111127",
	                "name.minikube.sigs.k8s.io": "pause-111127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db65f2e6ee23ba8c46277346351d0a39b8c08d0b3d1dc5f26629a086fd55af36",
	            "SandboxKey": "/var/run/docker/netns/db65f2e6ee23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-111127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:fc:ec:5a:c3:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "864ac5e1e6911a59c8edf516002d63887bbb913fe779feb67b767ddfe9621296",
	                    "EndpointID": "7a0fbb5c0903427a9268fb0c5641ef080d89a79c6c684369697833272b60cb59",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-111127",
	                        "0dd616257780"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
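
The `NetworkSettings.Ports` map in the inspect output above is what the Go template earlier in the log (`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`) walks to find the SSH host port (32983). A hedged sketch of the same lookup via the Docker Go SDK (github.com/docker/docker/client), handy when scripting post-mortems like this one; it assumes the container is still running and the SDK is on the module path:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		info, err := cli.ContainerInspect(context.Background(), "pause-111127")
		if err != nil {
			log.Fatal(err)
		}
		// Ports maps "22/tcp" to its host bindings; the first binding's
		// HostPort (32983 in the output above) is what minikube dials.
		bindings := info.NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			log.Fatal("no 22/tcp host binding")
		}
		fmt.Println(bindings[0].HostPort)
	}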
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-111127 -n pause-111127
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-111127 -n pause-111127: exit status 2 (362.835056ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
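
The "(may be ok)" note reflects that `minikube status` encodes component state in its exit code: the host container is Running, but with the kubelet disabled by the failed pause the command still exits non-zero. A quick way to see all components at once, using the same `--format` Go-template mechanism as the harness call above (`Host`, `Kubelet`, and `APIServer` are standard minikube status fields; the binary path matches the one the harness uses):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Print host, kubelet, and apiserver state on one line. minikube
	// status exits non-zero when any component is not Running, so the
	// error is reported here but not treated as fatal.
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"-p", "pause-111127",
			"--format", "host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}").CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Printf("(exit status conveys degraded state: %v)\n", err)
		}
	}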
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-111127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-111127 logs -n 25: (1.032590614s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬──────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                     ARGS                                      │   PROFILE    │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼──────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ ssh     │ -p false-624324 sudo cat /etc/hosts                                           │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /etc/resolv.conf                                     │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo crictl pods                                              │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo crictl ps --all                                          │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;   │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo ip a s                                                   │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo ip r s                                                   │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo iptables-save                                            │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo iptables -t nat -L -n -v                                 │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl status kubelet --all --full --no-pager         │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl cat kubelet --no-pager                         │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo journalctl -xeu kubelet --all --full --no-pager          │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /etc/kubernetes/kubelet.conf                         │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /var/lib/kubelet/config.yaml                         │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl status docker --all --full --no-pager          │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl cat docker --no-pager                          │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /etc/docker/daemon.json                              │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo docker system info                                       │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl status cri-docker --all --full --no-pager      │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl cat cri-docker --no-pager                      │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cat /usr/lib/systemd/system/cri-docker.service           │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo cri-dockerd --version                                    │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl status containerd --all --full --no-pager      │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	│ ssh     │ -p false-624324 sudo systemctl cat containerd --no-pager                      │ false-624324 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴──────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:11:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:11:27.504729  196618 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:11:27.504825  196618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:27.504829  196618 out.go:374] Setting ErrFile to fd 2...
	I1019 17:11:27.504834  196618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:27.505056  196618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:11:27.505615  196618 out.go:368] Setting JSON to false
	I1019 17:11:27.506901  196618 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3234,"bootTime":1760890654,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:11:27.506994  196618 start.go:143] virtualization: kvm guest
	I1019 17:11:27.509139  196618 out.go:179] * [NoKubernetes-212695] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:11:27.511430  196618 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:11:27.511436  196618 notify.go:221] Checking for updates...
	I1019 17:11:27.514119  196618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:11:27.515477  196618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:11:27.516602  196618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:11:27.521778  196618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:11:27.523142  196618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:11:27.525026  196618 config.go:182] Loaded profile config "pause-111127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:11:27.525199  196618 config.go:182] Loaded profile config "running-upgrade-857401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1019 17:11:27.525232  196618 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1019 17:11:27.525329  196618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:11:27.556255  196618 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:11:27.556359  196618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:27.630149  196618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-19 17:11:27.616003487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:27.630243  196618 docker.go:319] overlay module found
	I1019 17:11:27.632341  196618 out.go:179] * Using the docker driver based on user configuration
	I1019 17:11:27.633817  196618 start.go:309] selected driver: docker
	I1019 17:11:27.633836  196618 start.go:930] validating driver "docker" against <nil>
	I1019 17:11:27.633860  196618 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:11:27.634719  196618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:27.709052  196618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-19 17:11:27.697593612 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:27.709195  196618 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1019 17:11:27.709287  196618 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:11:27.709493  196618 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 17:11:27.712458  196618 out.go:179] * Using Docker driver with root privileges
	I1019 17:11:27.714102  196618 cni.go:84] Creating CNI manager for ""
	I1019 17:11:27.714179  196618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:11:27.714189  196618 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:11:27.714218  196618 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1019 17:11:27.714261  196618 start.go:353] cluster config:
	{Name:NoKubernetes-212695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-212695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:11:27.716650  196618 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-212695
	I1019 17:11:27.718006  196618 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:11:27.719648  196618 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:11:27.721026  196618 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1019 17:11:27.721139  196618 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:11:27.745979  196618 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:11:27.746003  196618 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	W1019 17:11:27.750481  196618 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1019 17:11:27.835134  196618 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1019 17:11:27.835310  196618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/NoKubernetes-212695/config.json ...
	I1019 17:11:27.835354  196618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/NoKubernetes-212695/config.json: {Name:mk7793a558040c6f54328b77084638b1589341d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:11:27.835545  196618 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:11:27.835574  196618 start.go:360] acquireMachinesLock for NoKubernetes-212695: {Name:mk7e033248e7d374e1a7d64069c85d94bda98a3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:11:27.835625  196618 start.go:364] duration metric: took 34.813µs to acquireMachinesLock for "NoKubernetes-212695"
	I1019 17:11:27.835644  196618 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-212695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-212695 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:11:27.835760  196618 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.559894936Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561011309Z" level=info msg="Conmon does support the --sync option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561032989Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561053021Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561988066Z" level=info msg="Conmon does support the --sync option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.562006196Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.566918916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.566948348Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.567652479Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.568200759Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.568331336Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.575960609Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.640211813Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-mtdc9 Namespace:kube-system ID:4f7e21f5e0c53057a397453c1e9a447cf6962a4f26e273ba8319d54a686fcca1 UID:91847e5a-bd5a-401f-b542-dd1ba4db10c4 NetNS:/var/run/netns/073b6cf2-f62c-42cd-9124-5f1b73c3b4e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000314520}] Aliases:map[]}"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.640527383Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-mtdc9 for CNI network kindnet (type=ptp)"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641165619Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641194169Z" level=info msg="Starting seccomp notifier watcher"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641250337Z" level=info msg="Create NRI interface"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641368101Z" level=info msg="built-in NRI default validator is disabled"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641384922Z" level=info msg="runtime interface created"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641396935Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641404602Z" level=info msg="runtime interface starting up..."
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641411403Z" level=info msg="starting plugins..."
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641426563Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641843431Z" level=info msg="No systemd watchdog enabled"
	Oct 19 17:11:23 pause-111127 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1efa543c20e1f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   4f7e21f5e0c53       coredns-66bc5c9577-mtdc9               kube-system
	c42c8c02b9578       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   55c9520cb0fad       kindnet-df4b5                          kube-system
	6338abe7acb04       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   d046cf6c14f94       kube-proxy-85snz                       kube-system
	44e8bdeb3906c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   19172e1b13b2e       kube-controller-manager-pause-111127   kube-system
	92b343f39375f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   9891565ec27e3       kube-apiserver-pause-111127            kube-system
	c0703092dc529       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   2913be1771ea0       kube-scheduler-pause-111127            kube-system
	265e375056bed       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   d4f47776d0f43       etcd-pause-111127                      kube-system
	
	
	==> coredns [1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38639 - 45969 "HINFO IN 3141851869736191747.2995879327024811340. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065732336s
	
	
	==> describe nodes <==
	Name:               pause-111127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-111127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-111127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_11_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:10:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-111127
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-111127
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                0895bf17-05ba-448f-b065-e66b32096ae1
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mtdc9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-111127                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-df4b5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-111127             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-111127    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-85snz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-111127             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-111127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-111127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-111127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-111127 event: Registered Node pause-111127 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-111127 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c] <==
	{"level":"warn","ts":"2025-10-19T17:10:56.961727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.975606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.983918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.992678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.006405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.016167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.026057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.037187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.048806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.059422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.067503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.079669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.088998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.103621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.137765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.163291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.171188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.176621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.186143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.195156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.211202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.226716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.239707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.253620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.337273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:11:30 up 53 min,  0 user,  load average: 5.03, 1.97, 1.15
	Linux pause-111127 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1] <==
	I1019 17:11:06.583450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:11:06.583724       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:11:06.583872       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:11:06.583890       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:11:06.583920       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:11:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:11:06.882010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:11:06.882105       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:11:06.882122       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:11:06.981438       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:11:07.282395       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:11:07.282465       1 metrics.go:72] Registering metrics
	I1019 17:11:07.282697       1 controller.go:711] "Syncing nftables rules"
	I1019 17:11:16.886175       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:11:16.886210       1 main.go:301] handling current node
	I1019 17:11:26.886217       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:11:26.886252       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f] <==
	I1019 17:10:58.111753       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:10:58.111991       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:10:58.112025       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:10:58.112110       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:10:58.112117       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:10:58.112124       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:10:58.114591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:10:58.115635       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:10:59.011174       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:10:59.019091       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:10:59.019113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:10:59.635915       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:10:59.679866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:10:59.814457       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:10:59.820785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:10:59.822026       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:10:59.826841       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:11:00.063308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:11:00.654264       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:11:00.664697       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:11:00.675139       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:11:05.464036       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:11:05.469598       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:11:05.963592       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:11:06.165320       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11] <==
	I1019 17:11:05.059102       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:11:05.059161       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:11:05.059251       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-111127"
	I1019 17:11:05.059308       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:11:05.059362       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:11:05.060658       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:11:05.060796       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:11:05.060822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:11:05.060892       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:11:05.061151       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:11:05.061457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:11:05.061937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:11:05.062018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:11:05.062540       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:11:05.062758       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:11:05.067008       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:11:05.067128       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:11:05.067306       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:11:05.069662       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:11:05.071805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:11:05.074967       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:11:05.083158       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:11:05.092515       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:11:05.096772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:11:20.060861       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809] <==
	I1019 17:11:06.391631       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:11:06.447300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:11:06.547480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:11:06.547516       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:11:06.547618       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:11:06.568344       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:11:06.568398       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:11:06.574616       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:11:06.575005       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:11:06.575043       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:11:06.576546       1 config.go:309] "Starting node config controller"
	I1019 17:11:06.576608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:11:06.576620       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:11:06.576584       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:11:06.576629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:11:06.576568       1 config.go:200] "Starting service config controller"
	I1019 17:11:06.576669       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:11:06.576822       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:11:06.576837       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:11:06.677747       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:11:06.677747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:11:06.677788       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905] <==
	E1019 17:10:58.074505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:10:58.074533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:10:58.074530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:10:58.074578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:10:58.074602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:10:58.074639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:10:58.074696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:10:58.074703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:10:58.930965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:10:59.015587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:10:59.068241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:10:59.097588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:10:59.124403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:10:59.185846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:10:59.211983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:10:59.217243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:10:59.219443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:10:59.239420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 17:10:59.240333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:10:59.253015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:10:59.385822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:10:59.439122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:10:59.447263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:10:59.450255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1019 17:11:00.967335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:11:01 pause-111127 kubelet[1295]: E1019 17:11:01.682976    1295 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-111127\" already exists" pod="kube-system/kube-scheduler-pause-111127"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.692441    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-111127" podStartSLOduration=1.692419573 podStartE2EDuration="1.692419573s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.656857231 +0000 UTC m=+1.185431824" watchObservedRunningTime="2025-10-19 17:11:01.692419573 +0000 UTC m=+1.220994165"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.692791    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-111127" podStartSLOduration=1.6927743880000001 podStartE2EDuration="1.692774388s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.691851477 +0000 UTC m=+1.220426089" watchObservedRunningTime="2025-10-19 17:11:01.692774388 +0000 UTC m=+1.221348982"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.707936    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-111127" podStartSLOduration=1.7079118709999999 podStartE2EDuration="1.707911871s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.707207195 +0000 UTC m=+1.235781792" watchObservedRunningTime="2025-10-19 17:11:01.707911871 +0000 UTC m=+1.236486462"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.746603    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-111127" podStartSLOduration=1.746583725 podStartE2EDuration="1.746583725s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.734153681 +0000 UTC m=+1.262728279" watchObservedRunningTime="2025-10-19 17:11:01.746583725 +0000 UTC m=+1.275158318"
	Oct 19 17:11:05 pause-111127 kubelet[1295]: I1019 17:11:05.078194    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:11:05 pause-111127 kubelet[1295]: I1019 17:11:05.078832    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047795    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-kube-proxy\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047844    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-xtables-lock\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047867    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-lib-modules\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047892    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnxt\" (UniqueName: \"kubernetes.io/projected/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-kube-api-access-2bnxt\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047949    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-cni-cfg\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047989    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-lib-modules\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.048025    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-xtables-lock\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.048078    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvzg\" (UniqueName: \"kubernetes.io/projected/3a0bcd30-c0b2-473d-b929-6851cb6f387a-kube-api-access-5mvzg\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.686739    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-df4b5" podStartSLOduration=1.686719188 podStartE2EDuration="1.686719188s" podCreationTimestamp="2025-10-19 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:06.686585525 +0000 UTC m=+6.215160117" watchObservedRunningTime="2025-10-19 17:11:06.686719188 +0000 UTC m=+6.215293781"
	Oct 19 17:11:07 pause-111127 kubelet[1295]: I1019 17:11:07.789866    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85snz" podStartSLOduration=2.789844161 podStartE2EDuration="2.789844161s" podCreationTimestamp="2025-10-19 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:06.70910188 +0000 UTC m=+6.237676475" watchObservedRunningTime="2025-10-19 17:11:07.789844161 +0000 UTC m=+7.318418754"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.335875    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.432038    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91847e5a-bd5a-401f-b542-dd1ba4db10c4-config-volume\") pod \"coredns-66bc5c9577-mtdc9\" (UID: \"91847e5a-bd5a-401f-b542-dd1ba4db10c4\") " pod="kube-system/coredns-66bc5c9577-mtdc9"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.432120    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfw5r\" (UniqueName: \"kubernetes.io/projected/91847e5a-bd5a-401f-b542-dd1ba4db10c4-kube-api-access-lfw5r\") pod \"coredns-66bc5c9577-mtdc9\" (UID: \"91847e5a-bd5a-401f-b542-dd1ba4db10c4\") " pod="kube-system/coredns-66bc5c9577-mtdc9"
	Oct 19 17:11:18 pause-111127 kubelet[1295]: I1019 17:11:18.727929    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mtdc9" podStartSLOduration=12.727897857 podStartE2EDuration="12.727897857s" podCreationTimestamp="2025-10-19 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:18.716600334 +0000 UTC m=+18.245174947" watchObservedRunningTime="2025-10-19 17:11:18.727897857 +0000 UTC m=+18.256472450"
	Oct 19 17:11:27 pause-111127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:11:27 pause-111127 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:11:27 pause-111127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:11:27 pause-111127 systemd[1]: kubelet.service: Consumed 1.231s CPU time.
	
-- /stdout --
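The kubelet.service shutdown at the tail of the kubelet log above is consistent with the operation under test: pausing a profile freezes the workload containers through the runtime and stops the kubelet so nothing restarts them, which is why the node container stays Running while the components go quiet. A minimal sketch for reproducing the check by hand, assuming the same profile name and binary path used throughout this report:

	out/minikube-linux-amd64 -p pause-111127 pause --alsologtostderr     # pause the cluster
	out/minikube-linux-amd64 -p pause-111127 status --format='{{.APIServer}}'
	out/minikube-linux-amd64 -p pause-111127 unpause                     # restore it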
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111127 -n pause-111127
I1019 17:11:31.219548    7228 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2446589675/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 17:11:31.238899    7228 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2446589675/001/docker-machine-driver-kvm2 version is 1.37.0
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111127 -n pause-111127: exit status 2 (384.958283ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
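The exit status from minikube status is a bitmask rather than a plain pass/fail: per the scheme described in minikube's own status help text, 1 flags the host, 2 the cluster (kubelet), and 4 Kubernetes, summed together, so the exit status 2 above most likely reflects the stopped kubelet seen in the log while the host and apiserver fields still print Running. A short sketch for decoding it, assuming that scheme:

	out/minikube-linux-amd64 status -p pause-111127 -n pause-111127; rc=$?
	(( rc & 1 )) && echo "host NOK"
	(( rc & 2 )) && echo "cluster NOK"       # the case above: kubelet stopped while paused
	(( rc & 4 )) && echo "kubernetes NOK"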
helpers_test.go:269: (dbg) Run:  kubectl --context pause-111127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-111127
helpers_test.go:243: (dbg) docker inspect pause-111127:

-- stdout --
	[
	    {
	        "Id": "0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2",
	        "Created": "2025-10-19T17:10:42.581887371Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:10:42.635672541Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/hosts",
	        "LogPath": "/var/lib/docker/containers/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2/0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2-json.log",
	        "Name": "/pause-111127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-111127:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-111127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0dd616257780453b85409ae3f8ea9e45a8025442459515719e8a1f5fd00646c2",
	                "LowerDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d32a2042c8290958da48779dc06f77710819f8861ff6346be8e586b6f283c1be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-111127",
	                "Source": "/var/lib/docker/volumes/pause-111127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-111127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-111127",
	                "name.minikube.sigs.k8s.io": "pause-111127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db65f2e6ee23ba8c46277346351d0a39b8c08d0b3d1dc5f26629a086fd55af36",
	            "SandboxKey": "/var/run/docker/netns/db65f2e6ee23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-111127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:fc:ec:5a:c3:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "864ac5e1e6911a59c8edf516002d63887bbb913fe779feb67b767ddfe9621296",
	                    "EndpointID": "7a0fbb5c0903427a9268fb0c5641ef080d89a79c6c684369697833272b60cb59",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-111127",
	                        "0dd616257780"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
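The published host ports live under NetworkSettings.Ports in the inspect output above (for example 8443/tcp is bound to 127.0.0.1:32986). Rather than scanning the full JSON, either of these pulls a single mapping; the Go-template form is a sketch assuming the field layout shown above:

	docker port pause-111127 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-111127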
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-111127 -n pause-111127
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-111127 -n pause-111127: exit status 2 (377.055605ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-111127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-111127 logs -n 25: (1.161664096s)
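The dump below is only the tail of each source: the -n 25 above maps to the logs --length flag, so every component section is capped at its last 25 lines. When a failure needs more context than that, a longer capture is straightforward (a sketch using the same profile):

	out/minikube-linux-amd64 -p pause-111127 logs -n 200
	out/minikube-linux-amd64 -p pause-111127 logs --file=/tmp/pause-111127.log   # full dump to a file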
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-624324 sudo systemctl status docker --all --full --no-pager                                       │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl cat docker --no-pager                                                       │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cat /etc/docker/daemon.json                                                           │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo docker system info                                                                    │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl status cri-docker --all --full --no-pager                                   │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl cat cri-docker --no-pager                                                   │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                              │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cat /usr/lib/systemd/system/cri-docker.service                                        │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cri-dockerd --version                                                                 │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl status containerd --all --full --no-pager                                   │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl cat containerd --no-pager                                                   │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cat /lib/systemd/system/containerd.service                                            │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo cat /etc/containerd/config.toml                                                       │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo containerd config dump                                                                │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl status crio --all --full --no-pager                                         │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo systemctl cat crio --no-pager                                                         │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                               │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p false-624324 sudo crio config                                                                           │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ delete  │ -p false-624324                                                                                            │ false-624324             │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │ 19 Oct 25 17:11 UTC │
	│ ssh     │ -p cilium-624324 sudo cat /etc/nsswitch.conf                                                               │ cilium-624324            │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ start   │ -p force-systemd-env-118963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-118963 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p cilium-624324 sudo cat /etc/hosts                                                                       │ cilium-624324            │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p cilium-624324 sudo cat /etc/resolv.conf                                                                 │ cilium-624324            │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p cilium-624324 sudo crictl pods                                                                          │ cilium-624324            │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ ssh     │ -p cilium-624324 sudo crictl ps --all                                                                      │ cilium-624324            │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:11:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:11:31.291805  199745 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:11:31.291939  199745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:31.291958  199745 out.go:374] Setting ErrFile to fd 2...
	I1019 17:11:31.291966  199745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:31.292307  199745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:11:31.292943  199745 out.go:368] Setting JSON to false
	I1019 17:11:31.294187  199745 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3237,"bootTime":1760890654,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:11:31.294271  199745 start.go:143] virtualization: kvm guest
	I1019 17:11:31.296987  199745 out.go:179] * [force-systemd-env-118963] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:11:31.299182  199745 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:11:31.299220  199745 notify.go:221] Checking for updates...
	I1019 17:11:31.302802  199745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:11:31.304369  199745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:11:31.309727  199745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:11:31.311223  199745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:11:31.312651  199745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1019 17:11:31.314778  199745 config.go:182] Loaded profile config "NoKubernetes-212695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1019 17:11:31.314983  199745 config.go:182] Loaded profile config "pause-111127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:11:31.315125  199745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:11:31.355784  199745 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:11:31.355910  199745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:31.432322  199745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 17:11:31.418561419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:31.432469  199745 docker.go:319] overlay module found
	I1019 17:11:31.435173  199745 out.go:179] * Using the docker driver based on user configuration
	I1019 17:11:31.436645  199745 start.go:309] selected driver: docker
	I1019 17:11:31.436666  199745 start.go:930] validating driver "docker" against <nil>
	I1019 17:11:31.436680  199745 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:11:31.437543  199745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:31.513647  199745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 17:11:31.5018337 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:31.513897  199745 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:11:31.514221  199745 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 17:11:31.515935  199745 out.go:179] * Using Docker driver with root privileges
	I1019 17:11:31.517151  199745 cni.go:84] Creating CNI manager for ""
	I1019 17:11:31.517213  199745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:11:31.517223  199745 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:11:31.517291  199745 start.go:353] cluster config:
	{Name:force-systemd-env-118963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-118963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:11:31.518574  199745 out.go:179] * Starting "force-systemd-env-118963" primary control-plane node in "force-systemd-env-118963" cluster
	I1019 17:11:31.520060  199745 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:11:31.521829  199745 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:11:31.523686  199745 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:11:31.523736  199745 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:11:31.523738  199745 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:11:31.523751  199745 cache.go:59] Caching tarball of preloaded images
	I1019 17:11:31.523889  199745 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:11:31.523903  199745 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:11:31.524061  199745 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/force-systemd-env-118963/config.json ...
	I1019 17:11:31.524117  199745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/force-systemd-env-118963/config.json: {Name:mkabc158565e5b82191a86c210754bc8318ad9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:11:31.552559  199745 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:11:31.552625  199745 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:11:31.552651  199745 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:11:31.552687  199745 start.go:360] acquireMachinesLock for force-systemd-env-118963: {Name:mkbdce16fa92359283653c192a664071c46c2ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:11:31.552817  199745 start.go:364] duration metric: took 80.781µs to acquireMachinesLock for "force-systemd-env-118963"
	I1019 17:11:31.552851  199745 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-118963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-118963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:11:31.552924  199745 start.go:125] createHost starting for "" (driver="docker")
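The start log above shows the new profile's cluster config being written to .minikube/profiles/force-systemd-env-118963/config.json before createHost begins. A hedged Go sketch that reads such a profile file back and prints the fields this suite cares about; the struct is a hypothetical subset of minikube's real config schema, and the path assumes the default MINIKUBE_HOME layout seen in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// profileConfig decodes only a few fields of the profile's config.json;
// unknown fields in the file are simply ignored by encoding/json.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	path := filepath.Join(home, ".minikube", "profiles",
		"force-systemd-env-118963", "config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n", cfg.Name, cfg.Driver,
		cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
}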
	
	
	==> CRI-O <==
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.559894936Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561011309Z" level=info msg="Conmon does support the --sync option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561032989Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561053021Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.561988066Z" level=info msg="Conmon does support the --sync option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.562006196Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.566918916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.566948348Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.567652479Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.568200759Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.568331336Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.575960609Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.640211813Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-mtdc9 Namespace:kube-system ID:4f7e21f5e0c53057a397453c1e9a447cf6962a4f26e273ba8319d54a686fcca1 UID:91847e5a-bd5a-401f-b542-dd1ba4db10c4 NetNS:/var/run/netns/073b6cf2-f62c-42cd-9124-5f1b73c3b4e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000314520}] Aliases:map[]}"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.640527383Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-mtdc9 for CNI network kindnet (type=ptp)"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641165619Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641194169Z" level=info msg="Starting seccomp notifier watcher"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641250337Z" level=info msg="Create NRI interface"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641368101Z" level=info msg="built-in NRI default validator is disabled"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641384922Z" level=info msg="runtime interface created"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641396935Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641404602Z" level=info msg="runtime interface starting up..."
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641411403Z" level=info msg="starting plugins..."
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641426563Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 19 17:11:23 pause-111127 crio[2137]: time="2025-10-19T17:11:23.641843431Z" level=info msg="No systemd watchdog enabled"
	Oct 19 17:11:23 pause-111127 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
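CRI-O comes back up at 17:11:23 with NRI enabled and the kindnet CNI detected; the TOML dump above is its effective configuration, and the audit table shows the same data being collected with `sudo crio config`. A small sketch that instead asks the running runtime for its status over CRI, assuming crictl is installed on the node and pointed at the CRI-O socket (its JSON layout here is the commonly observed one, not a guaranteed contract):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// `crictl info` reports the runtime's status (and config) as JSON;
	// only the readiness conditions are decoded here.
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status bool   `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s: %v\n", c.Type, c.Status) // expect RuntimeReady and NetworkReady true
	}
}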
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1efa543c20e1f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   4f7e21f5e0c53       coredns-66bc5c9577-mtdc9               kube-system
	c42c8c02b9578       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   55c9520cb0fad       kindnet-df4b5                          kube-system
	6338abe7acb04       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   d046cf6c14f94       kube-proxy-85snz                       kube-system
	44e8bdeb3906c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   19172e1b13b2e       kube-controller-manager-pause-111127   kube-system
	92b343f39375f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   9891565ec27e3       kube-apiserver-pause-111127            kube-system
	c0703092dc529       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   2913be1771ea0       kube-scheduler-pause-111127            kube-system
	265e375056bed       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   d4f47776d0f43       etcd-pause-111127                      kube-system
	
	
	==> coredns [1efa543c20e1f9a794164f98abafe4a7003b8d780f71eeeb2eb339f73cdfcae4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38639 - 45969 "HINFO IN 3141851869736191747.2995879327024811340. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065732336s
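The single HINFO query for a random name is CoreDNS's loop-detection self-probe; the NXDOMAIN reply means no forwarding loop was found. To probe the cluster DNS service directly (kube-dns is allocated ClusterIP 10.96.0.10 in the apiserver log below), a sketch that must run somewhere with a route to the service network:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send queries straight to the kube-dns ClusterIP instead of /etc/resolv.conf;
	// this only works from inside the cluster network (route to 10.96.0.0/12).
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // expect the API service ClusterIP, 10.96.0.1
}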
	
	
	==> describe nodes <==
	Name:               pause-111127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-111127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-111127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_11_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:10:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-111127
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:11:21 +0000   Sun, 19 Oct 2025 17:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-111127
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                0895bf17-05ba-448f-b065-e66b32096ae1
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mtdc9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-111127                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-df4b5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-111127             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-111127    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-85snz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-111127             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-111127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-111127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-111127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-111127 event: Registered Node pause-111127 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-111127 status is now: NodeReady
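In the Allocated resources block above, kubectl's percentages are integer ratios of requests (or limits) to the node's Allocatable: 850m of CPU requests against 8 allocatable CPUs is 850/8000, shown truncated to 10%, and 220Mi of memory requests against ~31.3Gi truncates to 0%. The same arithmetic with the numbers from this node:

package main

import "fmt"

func main() {
	// Figures from the "Allocatable" and "Allocated resources" blocks above.
	const (
		cpuRequestsMilli    = 850        // 850m
		cpuAllocatableMilli = 8 * 1000   // 8 CPUs
		memRequestsKi       = 220 * 1024 // 220Mi expressed in Ki
		memAllocatableKi    = 32863448   // Allocatable memory in Ki
	)
	fmt.Printf("cpu requests:    %d%%\n", cpuRequestsMilli*100/cpuAllocatableMilli) // 10
	fmt.Printf("memory requests: %d%%\n", memRequestsKi*100/memAllocatableKi)       // 0
}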
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [265e375056bed11db6c2a6e126a8448a0c1f620e6cce85eb3e05dfc7a03d9b2c] <==
	{"level":"warn","ts":"2025-10-19T17:10:56.961727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.975606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.983918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:56.992678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.006405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.016167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.026057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.037187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.048806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.059422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.067503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.079669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.088998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.103621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.137765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.163291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.171188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.176621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.186143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.195156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.211202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.226716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.239707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.253620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:10:57.337273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:11:32 up 53 min,  0 user,  load average: 5.03, 1.97, 1.15
	Linux pause-111127 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c42c8c02b9578597da919a7a18844a1559ae9ecefa496738a94be7df28c8ebb1] <==
	I1019 17:11:06.583450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:11:06.583724       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:11:06.583872       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:11:06.583890       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:11:06.583920       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:11:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:11:06.882010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:11:06.882105       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:11:06.882122       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:11:06.981438       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:11:07.282395       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:11:07.282465       1 metrics.go:72] Registering metrics
	I1019 17:11:07.282697       1 controller.go:711] "Syncing nftables rules"
	I1019 17:11:16.886175       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:11:16.886210       1 main.go:301] handling current node
	I1019 17:11:26.886217       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:11:26.886252       1 main.go:301] handling current node
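kindnet's only error is the NRI dial failure at 17:11:06, which predates the CRI-O restart logged above at 17:11:23; since the CRI-O config enables NRI on /var/run/nri/nri.sock, one plausible reading is that the socket simply did not exist yet when the plugin probed. A sketch that checks both the existence and the liveness of that socket, with the path taken from the CRI-O config above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/nri/nri.sock" // nri_listen from the CRI-O config above
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket missing:", err) // matches kindnet's dial error
		return
	}
	// The socket file exists; confirm something is actually accepting on it.
	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("NRI socket is accepting connections")
}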
	
	
	==> kube-apiserver [92b343f39375fdd6258e52106199a7b0f14eff1dfffdd90a6107f8e1107f2c9f] <==
	I1019 17:10:58.111753       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:10:58.111991       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:10:58.112025       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:10:58.112110       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:10:58.112117       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:10:58.112124       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:10:58.114591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:10:58.115635       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:10:59.011174       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:10:59.019091       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:10:59.019113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:10:59.635915       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:10:59.679866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:10:59.814457       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:10:59.820785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:10:59.822026       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:10:59.826841       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:11:00.063308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:11:00.654264       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:11:00.664697       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:11:00.675139       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:11:05.464036       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:11:05.469598       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:11:05.963592       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:11:06.165320       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44e8bdeb3906c41a9abacf791748f69b481ae82adc9eda27359692fb4ceb0d11] <==
	I1019 17:11:05.059102       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:11:05.059161       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:11:05.059251       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-111127"
	I1019 17:11:05.059308       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:11:05.059362       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:11:05.060658       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:11:05.060796       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:11:05.060822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:11:05.060892       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:11:05.061151       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:11:05.061457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:11:05.061937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:11:05.062018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:11:05.062540       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:11:05.062758       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:11:05.067008       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:11:05.067128       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:11:05.067306       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:11:05.069662       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:11:05.071805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:11:05.074967       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:11:05.083158       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:11:05.092515       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:11:05.096772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:11:20.060861       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6338abe7acb046a99cf86a6f0b9ae343154297b2fb9dca4f56e40933b5f37809] <==
	I1019 17:11:06.391631       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:11:06.447300       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:11:06.547480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:11:06.547516       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:11:06.547618       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:11:06.568344       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:11:06.568398       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:11:06.574616       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:11:06.575005       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:11:06.575043       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:11:06.576546       1 config.go:309] "Starting node config controller"
	I1019 17:11:06.576608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:11:06.576620       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:11:06.576584       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:11:06.576629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:11:06.576568       1 config.go:200] "Starting service config controller"
	I1019 17:11:06.576669       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:11:06.576822       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:11:06.576837       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:11:06.677747       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:11:06.677747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:11:06.677788       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0703092dc529da4d2a824e54e41dc2e2df1e9193b6d185904e9a7bffe3a0905] <==
	E1019 17:10:58.074505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:10:58.074533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:10:58.074530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:10:58.074578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:10:58.074602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:10:58.074639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:10:58.074696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:10:58.074703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:10:58.930965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:10:59.015587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:10:59.068241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:10:59.097588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:10:59.124403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:10:59.185846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:10:59.211983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:10:59.217243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:10:59.219443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:10:59.239420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 17:10:59.240333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:10:59.253015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:10:59.385822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:10:59.439122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:10:59.447263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:10:59.450255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1019 17:11:00.967335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:11:01 pause-111127 kubelet[1295]: E1019 17:11:01.682976    1295 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-111127\" already exists" pod="kube-system/kube-scheduler-pause-111127"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.692441    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-111127" podStartSLOduration=1.692419573 podStartE2EDuration="1.692419573s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.656857231 +0000 UTC m=+1.185431824" watchObservedRunningTime="2025-10-19 17:11:01.692419573 +0000 UTC m=+1.220994165"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.692791    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-111127" podStartSLOduration=1.6927743880000001 podStartE2EDuration="1.692774388s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.691851477 +0000 UTC m=+1.220426089" watchObservedRunningTime="2025-10-19 17:11:01.692774388 +0000 UTC m=+1.221348982"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.707936    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-111127" podStartSLOduration=1.7079118709999999 podStartE2EDuration="1.707911871s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.707207195 +0000 UTC m=+1.235781792" watchObservedRunningTime="2025-10-19 17:11:01.707911871 +0000 UTC m=+1.236486462"
	Oct 19 17:11:01 pause-111127 kubelet[1295]: I1019 17:11:01.746603    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-111127" podStartSLOduration=1.746583725 podStartE2EDuration="1.746583725s" podCreationTimestamp="2025-10-19 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:01.734153681 +0000 UTC m=+1.262728279" watchObservedRunningTime="2025-10-19 17:11:01.746583725 +0000 UTC m=+1.275158318"
	Oct 19 17:11:05 pause-111127 kubelet[1295]: I1019 17:11:05.078194    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:11:05 pause-111127 kubelet[1295]: I1019 17:11:05.078832    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047795    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-kube-proxy\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047844    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-xtables-lock\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047867    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-lib-modules\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047892    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnxt\" (UniqueName: \"kubernetes.io/projected/ac90e4d0-c40f-49a3-91aa-18ccef82a85f-kube-api-access-2bnxt\") pod \"kube-proxy-85snz\" (UID: \"ac90e4d0-c40f-49a3-91aa-18ccef82a85f\") " pod="kube-system/kube-proxy-85snz"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047949    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-cni-cfg\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.047989    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-lib-modules\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.048025    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a0bcd30-c0b2-473d-b929-6851cb6f387a-xtables-lock\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.048078    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvzg\" (UniqueName: \"kubernetes.io/projected/3a0bcd30-c0b2-473d-b929-6851cb6f387a-kube-api-access-5mvzg\") pod \"kindnet-df4b5\" (UID: \"3a0bcd30-c0b2-473d-b929-6851cb6f387a\") " pod="kube-system/kindnet-df4b5"
	Oct 19 17:11:06 pause-111127 kubelet[1295]: I1019 17:11:06.686739    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-df4b5" podStartSLOduration=1.686719188 podStartE2EDuration="1.686719188s" podCreationTimestamp="2025-10-19 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:06.686585525 +0000 UTC m=+6.215160117" watchObservedRunningTime="2025-10-19 17:11:06.686719188 +0000 UTC m=+6.215293781"
	Oct 19 17:11:07 pause-111127 kubelet[1295]: I1019 17:11:07.789866    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85snz" podStartSLOduration=2.789844161 podStartE2EDuration="2.789844161s" podCreationTimestamp="2025-10-19 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:06.70910188 +0000 UTC m=+6.237676475" watchObservedRunningTime="2025-10-19 17:11:07.789844161 +0000 UTC m=+7.318418754"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.335875    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.432038    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91847e5a-bd5a-401f-b542-dd1ba4db10c4-config-volume\") pod \"coredns-66bc5c9577-mtdc9\" (UID: \"91847e5a-bd5a-401f-b542-dd1ba4db10c4\") " pod="kube-system/coredns-66bc5c9577-mtdc9"
	Oct 19 17:11:17 pause-111127 kubelet[1295]: I1019 17:11:17.432120    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfw5r\" (UniqueName: \"kubernetes.io/projected/91847e5a-bd5a-401f-b542-dd1ba4db10c4-kube-api-access-lfw5r\") pod \"coredns-66bc5c9577-mtdc9\" (UID: \"91847e5a-bd5a-401f-b542-dd1ba4db10c4\") " pod="kube-system/coredns-66bc5c9577-mtdc9"
	Oct 19 17:11:18 pause-111127 kubelet[1295]: I1019 17:11:18.727929    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mtdc9" podStartSLOduration=12.727897857 podStartE2EDuration="12.727897857s" podCreationTimestamp="2025-10-19 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:11:18.716600334 +0000 UTC m=+18.245174947" watchObservedRunningTime="2025-10-19 17:11:18.727897857 +0000 UTC m=+18.256472450"
	Oct 19 17:11:27 pause-111127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:11:27 pause-111127 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:11:27 pause-111127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:11:27 pause-111127 systemd[1]: kubelet.service: Consumed 1.231s CPU time.
	

-- /stdout --
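Note: the kubelet log above ends with systemd stopping kubelet.service, consistent with `minikube pause`, which stops the kubelet before freezing the workload containers. A minimal sketch for checking the node state by hand (profile name taken from the logs above; commands are illustrative, not part of the test run):

    out/minikube-linux-amd64 ssh -p pause-111127 -- sudo systemctl is-active kubelet   # expect "inactive" after a successful pause
    out/minikube-linux-amd64 ssh -p pause-111127 -- sudo crictl ps                     # containers CRI-O still reports as running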
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111127 -n pause-111127
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111127 -n pause-111127: exit status 2 (371.567703ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-111127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.50s)
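Note: the status probe above printed "Running" for the API server yet exited 2; minikube encodes per-component state in the status exit code, so a non-zero exit means some component was not fully up even when the selected field looks healthy. A sketch reproducing the probe outside the harness (profile name from the test; illustrative only):

    out/minikube-linux-amd64 status -p pause-111127 --format='{{.APIServer}}'; echo "exit=$?"
    out/minikube-linux-amd64 status -p pause-111127   # full component table instead of a single field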

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.836031ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
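Note: the stderr above pins the failure: before enabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that check aborts because /run/runc does not exist. A hedged diagnostic against the same node container (the kic container is named after the profile; illustrative only):

    docker exec old-k8s-version-904967 sudo runc list -f json   # reproduces the exit status 1 shown above
    docker exec old-k8s-version-904967 ls -ld /run/runc         # "No such file or directory" matches the runc error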
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-904967 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-904967 describe deploy/metrics-server -n kube-system: exit status 1 (81.024031ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-904967 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
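Note: the assertion is a substring match against `kubectl describe deploy/metrics-server`, and the deployment was never created here, so the deployment info is empty. When the addon does apply, the image/registry overrides can be verified directly; a sketch (context name from this test; illustrative only):

    kubectl --context old-k8s-version-904967 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected: fake.domain/registry.k8s.io/echoserver:1.4 (the --registries value prefixed to the --images value)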
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-904967
helpers_test.go:243: (dbg) docker inspect old-k8s-version-904967:

-- stdout --
	[
	    {
	        "Id": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	        "Created": "2025-10-19T17:13:07.590891639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228789,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:13:07.630094377Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hosts",
	        "LogPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719-json.log",
	        "Name": "/old-k8s-version-904967",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-904967:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-904967",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	                "LowerDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/merged",
	                "UpperDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/diff",
	                "WorkDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-904967",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-904967/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-904967",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d51701a0504a8170ad582ea083dfa5978a8f41ae8d45c02a675f672c44887be",
	            "SandboxKey": "/var/run/docker/netns/2d51701a0504",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-904967": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:93:43:d6:5f:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfc938debfe6129c7f9048a0a383817f7b1fb100af5f0af7c3b32f6517e76495",
	                    "EndpointID": "4cc8238fd7a575ed0638bb6c16ee14232d484b3fc0bfd448e50ba0260a23b528",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-904967",
	                        "c0f82ef529f8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
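Note: the fields the post-mortem reads off this inspect dump can be pulled directly with docker's template syntax instead of scanning the JSON; a sketch (container and network names from the inspect above; illustrative only):

    # node IP on the profile's network (192.168.85.2 above)
    docker inspect old-k8s-version-904967 --format '{{ (index .NetworkSettings.Networks "old-k8s-version-904967").IPAddress }}'
    # host port published for the API server port 8443/tcp (33052 above)
    docker inspect old-k8s-version-904967 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'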
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25: (1.518892115s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-575331 --schedule 5m                                                                                                                                                                                                        │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 5m                                                                                                                                                                                                        │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 5m                                                                                                                                                                                                        │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --cancel-scheduled                                                                                                                                                                                                   │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │ 19 Oct 25 17:08 UTC │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │ 19 Oct 25 17:09 UTC │
	│ delete  │ -p scheduled-stop-575331                                                                                                                                                                                                                      │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:10 UTC │ 19 Oct 25 17:10 UTC │
	│ stop    │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ cert-options-639932 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ -p cert-options-639932 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ delete  │ -p cert-options-639932                                                                                                                                                                                                                        │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │                     │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-447724    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ stop    │ stopped-upgrade-659566 stop                                                                                                                                                                                                                   │ stopped-upgrade-659566    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:13:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:13:33.361233  234083 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:13:33.361534  234083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:13:33.361544  234083 out.go:374] Setting ErrFile to fd 2...
	I1019 17:13:33.361549  234083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:13:33.361745  234083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:13:33.362284  234083 out.go:368] Setting JSON to false
	I1019 17:13:33.363534  234083 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3359,"bootTime":1760890654,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:13:33.363640  234083 start.go:143] virtualization: kvm guest
	I1019 17:13:33.366002  234083 out.go:179] * [no-preload-806996] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:13:33.367447  234083 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:13:33.367487  234083 notify.go:221] Checking for updates...
	I1019 17:13:33.369842  234083 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:13:33.371158  234083 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:13:33.372508  234083 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:13:33.373633  234083 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:13:33.374790  234083 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:13:33.376688  234083 config.go:182] Loaded profile config "cert-expiration-132648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:13:33.376856  234083 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:13:33.376993  234083 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:13:33.377147  234083 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:13:33.403036  234083 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:13:33.403169  234083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:13:33.463960  234083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 17:13:33.452486903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:13:33.464074  234083 docker.go:319] overlay module found
	I1019 17:13:33.465855  234083 out.go:179] * Using the docker driver based on user configuration
	I1019 17:13:28.728526  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:28.729024  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:29.228623  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:29.229055  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:29.728350  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:29.728826  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:30.228238  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:30.228671  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:30.728231  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:30.728695  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:31.228209  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:31.228659  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:31.728352  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:31.728737  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:32.228480  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:32.228906  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:32.728245  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:32.728729  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:33.228267  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:33.228702  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:33.467139  234083 start.go:309] selected driver: docker
	I1019 17:13:33.467178  234083 start.go:930] validating driver "docker" against <nil>
	I1019 17:13:33.467191  234083 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:13:33.467779  234083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:13:33.529435  234083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 17:13:33.517695885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:13:33.529619  234083 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:13:33.529845  234083 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:13:33.531410  234083 out.go:179] * Using Docker driver with root privileges
	I1019 17:13:33.532568  234083 cni.go:84] Creating CNI manager for ""
	I1019 17:13:33.532632  234083 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:13:33.532642  234083 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:13:33.532706  234083 start.go:353] cluster config:
	{Name:no-preload-806996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-806996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:13:33.534108  234083 out.go:179] * Starting "no-preload-806996" primary control-plane node in "no-preload-806996" cluster
	I1019 17:13:33.535348  234083 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:13:33.536552  234083 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:13:33.537616  234083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:13:33.537663  234083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:13:33.537780  234083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/config.json ...
	I1019 17:13:33.537822  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/config.json: {Name:mkccb034c3245ece1a81c204fc647b8a009b7b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:33.537842  234083 cache.go:107] acquiring lock: {Name:mkd4c9a40e7430d9bf09e8979d5e571a9b628683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.537890  234083 cache.go:107] acquiring lock: {Name:mkec9786025a254b142af09aa57068460be887b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.537958  234083 cache.go:115] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 17:13:33.537922  234083 cache.go:107] acquiring lock: {Name:mkc2159c83d33184d0d183c332bc6852c49a486c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.537965  234083 cache.go:107] acquiring lock: {Name:mkbd89dd84f5b7da38c8e38efc7a7b0880afa8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.537976  234083 cache.go:107] acquiring lock: {Name:mk2610fbaf4f8126dbc8c3c1c7de8957ad5c61a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.538021  234083 cache.go:107] acquiring lock: {Name:mk31bc6f73cf909301146e6f32ef5c375bdc4e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.538029  234083 cache.go:107] acquiring lock: {Name:mk6fbaed29c42193734b4fd3d6e11c5ac2a78f48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.538046  234083 cache.go:107] acquiring lock: {Name:mke0990bafc9bd18924c80f06997e1cc0f9d2814 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.538059  234083 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:33.537971  234083 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.64µs
	I1019 17:13:33.538103  234083 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 17:13:33.538179  234083 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:33.538205  234083 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:33.538208  234083 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1019 17:13:33.538181  234083 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:33.538252  234083 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:33.538286  234083 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:33.539563  234083 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:33.539564  234083 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:33.539584  234083 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1019 17:13:33.539563  234083 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:33.539564  234083 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:33.539645  234083 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:33.539651  234083 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:33.562415  234083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:13:33.562434  234083 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:13:33.562449  234083 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:13:33.562473  234083 start.go:360] acquireMachinesLock for no-preload-806996: {Name:mkf02ba6d8f5da746fd7e0b3107a7d1ee6226ae8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:33.562565  234083 start.go:364] duration metric: took 76.125µs to acquireMachinesLock for "no-preload-806996"
	I1019 17:13:33.562589  234083 start.go:93] Provisioning new machine with config: &{Name:no-preload-806996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-806996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:13:33.562661  234083 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:13:32.133215  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:32.633525  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:33.133259  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:33.633164  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:34.133369  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:34.632806  228157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:13:34.749107  228157 kubeadm.go:1114] duration metric: took 12.19527577s to wait for elevateKubeSystemPrivileges
	I1019 17:13:34.749229  228157 kubeadm.go:403] duration metric: took 22.112838715s to StartCluster
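
The burst of `kubectl get sa default` calls above is minikube's elevateKubeSystemPrivileges wait: it polls roughly every 500ms until the `default` service account exists in the new cluster, which is what the 12.19s duration metric measures. A minimal sketch of that loop, reusing the binary and kubeconfig paths from the log:

	until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms retry spacing visible in the timestamps
	done
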
	I1019 17:13:34.749299  228157 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:34.749448  228157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:13:34.751488  228157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:34.752650  228157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:13:34.752723  228157 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:13:34.752791  228157 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:13:34.752880  228157 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:13:34.752906  228157 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-904967"
	I1019 17:13:34.752927  228157 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-904967"
	I1019 17:13:34.752941  228157 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-904967"
	I1019 17:13:34.752958  228157 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:13:34.752969  228157 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-904967"
	I1019 17:13:34.753512  228157 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:13:34.753649  228157 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:13:34.755159  228157 out.go:179] * Verifying Kubernetes components...
	I1019 17:13:34.758239  228157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:13:34.791165  228157 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:34.792591  228157 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:13:34.792663  228157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:13:34.792857  228157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:13:34.804932  228157 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-904967"
	I1019 17:13:34.804990  228157 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:13:34.806624  228157 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:13:34.841469  228157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:13:34.868509  228157 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:13:34.868607  228157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:13:34.868701  228157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:13:34.900630  228157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:13:34.920975  228157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:13:34.988528  228157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:13:34.997226  228157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:13:35.055395  228157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:13:35.295428  228157 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 17:13:35.298394  228157 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-904967" to be "Ready" ...
	I1019 17:13:35.572570  228157 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:13:35.587501  228157 addons.go:515] duration metric: took 834.702001ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:13:35.802681  228157 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-904967" context rescaled to 1 replicas
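
The ConfigMap rewrite at 17:13:34.920975 splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal, and also inserts `log` after the `errors` plugin. Reconstructed from that sed pipeline (a sketch of the resulting Corefile fragment, not captured output):

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

The rescale at 17:13:35.802681 is roughly equivalent to `kubectl -n kube-system scale deployment coredns --replicas=1`.
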
	I1019 17:13:33.565546  234083 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:13:33.565761  234083 start.go:159] libmachine.API.Create for "no-preload-806996" (driver="docker")
	I1019 17:13:33.565792  234083 client.go:171] LocalClient.Create starting
	I1019 17:13:33.565879  234083 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:13:33.565910  234083 main.go:143] libmachine: Decoding PEM data...
	I1019 17:13:33.565925  234083 main.go:143] libmachine: Parsing certificate...
	I1019 17:13:33.565981  234083 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:13:33.566000  234083 main.go:143] libmachine: Decoding PEM data...
	I1019 17:13:33.566014  234083 main.go:143] libmachine: Parsing certificate...
	I1019 17:13:33.566387  234083 cli_runner.go:164] Run: docker network inspect no-preload-806996 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:13:33.585823  234083 cli_runner.go:211] docker network inspect no-preload-806996 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:13:33.585926  234083 network_create.go:284] running [docker network inspect no-preload-806996] to gather additional debugging logs...
	I1019 17:13:33.585953  234083 cli_runner.go:164] Run: docker network inspect no-preload-806996
	W1019 17:13:33.606612  234083 cli_runner.go:211] docker network inspect no-preload-806996 returned with exit code 1
	I1019 17:13:33.606637  234083 network_create.go:287] error running [docker network inspect no-preload-806996]: docker network inspect no-preload-806996: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-806996 not found
	I1019 17:13:33.606649  234083 network_create.go:289] output of [docker network inspect no-preload-806996]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-806996 not found
	
	** /stderr **
	I1019 17:13:33.606763  234083 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:13:33.625951  234083 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:13:33.626744  234083 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:13:33.627541  234083 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:13:33.628401  234083 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c01030}
	I1019 17:13:33.628429  234083 network_create.go:124] attempt to create docker network no-preload-806996 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 17:13:33.628489  234083 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-806996 no-preload-806996
	I1019 17:13:33.694877  234083 network_create.go:108] docker network no-preload-806996 192.168.76.0/24 created
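
Subnet selection above walks the private /24 candidates, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges own them, and settles on 192.168.76.0/24 before running the `docker network create` at 17:13:33.628489. One way to confirm the result afterwards (standard Docker CLI, not part of minikube's flow):

	docker network inspect no-preload-806996 \
	    --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 192.168.76.1
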
	I1019 17:13:33.694919  234083 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-806996" container
	I1019 17:13:33.694992  234083 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:13:33.715978  234083 cli_runner.go:164] Run: docker volume create no-preload-806996 --label name.minikube.sigs.k8s.io=no-preload-806996 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:13:33.735807  234083 oci.go:103] Successfully created a docker volume no-preload-806996
	I1019 17:13:33.735891  234083 cli_runner.go:164] Run: docker run --rm --name no-preload-806996-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-806996 --entrypoint /usr/bin/test -v no-preload-806996:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:13:33.748776  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1019 17:13:33.760635  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1019 17:13:33.762282  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1019 17:13:33.764139  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1019 17:13:33.770628  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1019 17:13:33.793924  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1019 17:13:33.800400  234083 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1019 17:13:33.871453  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1019 17:13:33.871487  234083 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 333.566471ms
	I1019 17:13:33.871505  234083 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 17:13:34.167542  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 17:13:34.167567  234083 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 629.67805ms
	I1019 17:13:34.167581  234083 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 17:13:34.179318  234083 oci.go:107] Successfully prepared a docker volume no-preload-806996
	I1019 17:13:34.179351  234083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1019 17:13:34.179447  234083 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:13:34.179474  234083 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:13:34.179509  234083 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:13:34.243618  234083 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-806996 --name no-preload-806996 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-806996 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-806996 --network no-preload-806996 --ip 192.168.76.2 --volume no-preload-806996:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:13:34.531733  234083 cli_runner.go:164] Run: docker container inspect no-preload-806996 --format={{.State.Running}}
	I1019 17:13:34.551385  234083 cli_runner.go:164] Run: docker container inspect no-preload-806996 --format={{.State.Status}}
	I1019 17:13:34.571527  234083 cli_runner.go:164] Run: docker exec no-preload-806996 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:13:34.627329  234083 oci.go:144] the created container "no-preload-806996" has a running status.
	I1019 17:13:34.627367  234083 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa...
	I1019 17:13:34.737204  234083 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:13:34.788951  234083 cli_runner.go:164] Run: docker container inspect no-preload-806996 --format={{.State.Status}}
	I1019 17:13:34.837538  234083 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:13:34.837567  234083 kic_runner.go:114] Args: [docker exec --privileged no-preload-806996 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:13:34.915510  234083 cli_runner.go:164] Run: docker container inspect no-preload-806996 --format={{.State.Status}}
	I1019 17:13:34.948376  234083 machine.go:94] provisionDockerMachine start ...
	I1019 17:13:34.948490  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:34.991008  234083 main.go:143] libmachine: Using SSH client type: native
	I1019 17:13:34.991397  234083 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1019 17:13:34.991422  234083 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:13:34.992327  234083 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60552->127.0.0.1:33059: read: connection reset by peer
	I1019 17:13:35.375061  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 17:13:35.375239  234083 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.837214285s
	I1019 17:13:35.375304  234083 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 17:13:35.396111  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 17:13:35.396144  234083 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.858102342s
	I1019 17:13:35.396165  234083 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 17:13:35.420653  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 17:13:35.420691  234083 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.882713445s
	I1019 17:13:35.420710  234083 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 17:13:35.451606  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 17:13:35.451641  234083 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.913793696s
	I1019 17:13:35.451655  234083 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 17:13:35.809142  234083 cache.go:157] /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 17:13:35.809178  234083 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.271298359s
	I1019 17:13:35.809198  234083 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 17:13:35.809218  234083 cache.go:87] Successfully saved all images to host disk.
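
Because no preload tarball exists for v1.34.1 with crio, each control-plane image is fetched and saved as a tar under the cache tree named in the lines above. Listing that tree should show one file per image (a sketch based on the cache paths logged here):

	ls -R /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/
	# etcd_3.6.4-0  kube-apiserver_v1.34.1  kube-controller-manager_v1.34.1
	# kube-proxy_v1.34.1  kube-scheduler_v1.34.1  pause_3.10.1  coredns/coredns_v1.12.1
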
	I1019 17:13:38.130915  234083 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-806996
	
	I1019 17:13:38.130949  234083 ubuntu.go:182] provisioning hostname "no-preload-806996"
	I1019 17:13:38.131029  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:38.150148  234083 main.go:143] libmachine: Using SSH client type: native
	I1019 17:13:38.150376  234083 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1019 17:13:38.150390  234083 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-806996 && echo "no-preload-806996" | sudo tee /etc/hostname
	I1019 17:13:38.297216  234083 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-806996
	
	I1019 17:13:38.297315  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:38.316837  234083 main.go:143] libmachine: Using SSH client type: native
	I1019 17:13:38.317133  234083 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1019 17:13:38.317155  234083 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-806996' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-806996/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-806996' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:13:33.728470  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:33.728932  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:34.228225  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:34.228533  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:34.732140  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:34.732543  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:35.228794  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:35.229266  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:35.727802  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:35.728354  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:36.227803  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:36.228264  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:36.727823  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:36.728290  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:37.227829  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:37.228313  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:37.727855  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:37.728313  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:38.227816  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:38.228203  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
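
Process 219832 is still probing the other cluster's apiserver at 192.168.94.2:8443 roughly every 500ms; "connection refused" means nothing is listening on that port yet. The probe boils down to (a hedged equivalent; `-k` because the endpoint presents the cluster's own CA):

	curl -sk --max-time 2 https://192.168.94.2:8443/healthz
	# prints "ok" once the apiserver is serving; connection refused until then
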
	I1019 17:13:38.454241  234083 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:13:38.454273  234083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:13:38.454303  234083 ubuntu.go:190] setting up certificates
	I1019 17:13:38.454318  234083 provision.go:84] configureAuth start
	I1019 17:13:38.454379  234083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-806996
	I1019 17:13:38.473739  234083 provision.go:143] copyHostCerts
	I1019 17:13:38.473803  234083 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:13:38.473815  234083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:13:38.473905  234083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:13:38.474059  234083 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:13:38.474099  234083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:13:38.474146  234083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:13:38.474228  234083 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:13:38.474241  234083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:13:38.474279  234083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:13:38.474350  234083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.no-preload-806996 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-806996]
	I1019 17:13:38.788086  234083 provision.go:177] copyRemoteCerts
	I1019 17:13:38.788151  234083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:13:38.788188  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:38.807392  234083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:13:38.906341  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:13:38.928175  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:13:38.947353  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:13:38.966580  234083 provision.go:87] duration metric: took 512.245689ms to configureAuth
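
configureAuth regenerates the host-side CA/client certs and a server cert whose SANs were listed at 17:13:38.474350, then copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the node. The SANs can be double-checked with standard openssl (not part of minikube's flow):

	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	# expect entries for: 127.0.0.1 192.168.76.2 localhost minikube no-preload-806996
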
	I1019 17:13:38.966612  234083 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:13:38.966832  234083 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:13:38.966947  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:38.986894  234083 main.go:143] libmachine: Using SSH client type: native
	I1019 17:13:38.987208  234083 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1019 17:13:38.987231  234083 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:13:39.236472  234083 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:13:39.236508  234083 machine.go:97] duration metric: took 4.288108646s to provisionDockerMachine
	I1019 17:13:39.236523  234083 client.go:174] duration metric: took 5.670722042s to LocalClient.Create
	I1019 17:13:39.236548  234083 start.go:167] duration metric: took 5.67078772s to libmachine.API.Create "no-preload-806996"
	I1019 17:13:39.236564  234083 start.go:293] postStartSetup for "no-preload-806996" (driver="docker")
	I1019 17:13:39.236583  234083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:13:39.236660  234083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:13:39.236714  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:39.256661  234083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:13:39.356454  234083 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:13:39.360439  234083 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:13:39.360471  234083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:13:39.360498  234083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:13:39.360564  234083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:13:39.360680  234083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:13:39.360808  234083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:13:39.369206  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:13:39.391223  234083 start.go:296] duration metric: took 154.640322ms for postStartSetup
	I1019 17:13:39.391595  234083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-806996
	I1019 17:13:39.410496  234083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/config.json ...
	I1019 17:13:39.410823  234083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:13:39.410881  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:39.430196  234083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:13:39.525928  234083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:13:39.531272  234083 start.go:128] duration metric: took 5.968594226s to createHost
	I1019 17:13:39.531314  234083 start.go:83] releasing machines lock for "no-preload-806996", held for 5.968722335s
	I1019 17:13:39.531390  234083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-806996
	I1019 17:13:39.549956  234083 ssh_runner.go:195] Run: cat /version.json
	I1019 17:13:39.550015  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:39.550051  234083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:13:39.550154  234083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:13:39.569527  234083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:13:39.569918  234083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:13:39.720200  234083 ssh_runner.go:195] Run: systemctl --version
	I1019 17:13:39.727325  234083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:13:39.766842  234083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:13:39.771935  234083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:13:39.772008  234083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:13:39.799930  234083 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 17:13:39.799955  234083 start.go:496] detecting cgroup driver to use...
	I1019 17:13:39.799988  234083 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:13:39.800037  234083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:13:39.816905  234083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:13:39.831267  234083 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:13:39.831329  234083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:13:39.849500  234083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:13:39.868399  234083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:13:39.956048  234083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:13:40.046155  234083 docker.go:234] disabling docker service ...
	I1019 17:13:40.046226  234083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:13:40.067842  234083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:13:40.082138  234083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:13:40.169996  234083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:13:40.258850  234083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:13:40.272335  234083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:13:40.287564  234083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:13:40.287623  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.298801  234083 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:13:40.298875  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.309125  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.318738  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.328833  234083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:13:40.338029  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.348218  234083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.363306  234083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:13:40.373373  234083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:13:40.381833  234083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:13:40.390424  234083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:13:40.474183  234083 ssh_runner.go:195] Run: sudo systemctl restart crio
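
The sed edits above all target the /etc/crio/crio.conf.d/02-crio.conf drop-in before crio is restarted. Reconstructed from those commands, the relevant keys end up as (a sketch of the effective settings, not a captured file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

Together with the /etc/crictl.yaml written at 17:13:40.272335 (runtime-endpoint: unix:///var/run/crio/crio.sock), this is what lets the crictl calls that follow talk to CRI-O.
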
	I1019 17:13:40.879533  234083 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:13:40.879607  234083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:13:40.884051  234083 start.go:564] Will wait 60s for crictl version
	I1019 17:13:40.884158  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:40.888000  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:13:40.913204  234083 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:13:40.913284  234083 ssh_runner.go:195] Run: crio --version
	I1019 17:13:40.943029  234083 ssh_runner.go:195] Run: crio --version
	I1019 17:13:40.974148  234083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 17:13:37.302314  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	W1019 17:13:39.802292  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	I1019 17:13:40.975722  234083 cli_runner.go:164] Run: docker network inspect no-preload-806996 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:13:40.994677  234083 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:13:40.999425  234083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:13:41.011197  234083 kubeadm.go:884] updating cluster {Name:no-preload-806996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-806996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:13:41.011333  234083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:13:41.011373  234083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:13:41.038435  234083 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1019 17:13:41.038466  234083 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1019 17:13:41.038536  234083 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:41.038542  234083 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.038578  234083 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.038605  234083 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.038615  234083 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.038580  234083 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1019 17:13:41.038726  234083 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.038553  234083 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.039829  234083 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:41.040020  234083 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.040058  234083 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.040085  234083 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.040105  234083 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1019 17:13:41.040106  234083 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.040115  234083 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.040019  234083 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
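
Every "daemon lookup ... No such image" line here is the expected cache-miss path: with no preload tarball available, minikube first asks the local Docker daemon for each image and only then falls back to the tarballs under .minikube/cache/images, as the later "Loading image from" lines show. A simplified sketch of that fallback, with a hypothetical resolveImage helper and path layout:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// resolveImage mirrors the fallback in the log: ask the local daemon for
// the image first; on "No such image", fall back to the tarball cached on
// disk. Illustrative sketch, not minikube code.
func resolveImage(name, cacheDir string) (string, error) {
	if err := exec.Command("docker", "image", "inspect", name).Run(); err == nil {
		return "daemon:" + name, nil
	}
	tar := filepath.Join(cacheDir, strings.ReplaceAll(name, ":", "_"))
	if _, err := os.Stat(tar); err != nil {
		return "", fmt.Errorf("%s not in daemon or cache: %w", name, err)
	}
	return tar, nil
}

func main() {
	fmt.Println(resolveImage("registry.k8s.io/pause:3.10.1", "/home/jenkins/.minikube/cache/images/amd64"))
}
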
	I1019 17:13:41.212946  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.220528  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.223523  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.232364  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.253182  234083 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1019 17:13:41.253243  234083 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.253292  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.262244  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.262316  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1019 17:13:41.264752  234083 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1019 17:13:41.264803  234083 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.264839  234083 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1019 17:13:41.264890  234083 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.264939  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.264849  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.274633  234083 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1019 17:13:41.274683  234083 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.274732  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.274740  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.303993  234083 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1019 17:13:41.304039  234083 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.304109  234083 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1019 17:13:41.304146  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.304152  234083 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1019 17:13:41.304172  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.304202  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.304226  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.307332  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.307343  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.328217  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.338163  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.338221  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.338172  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.338260  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1019 17:13:41.341590  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.341671  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 17:13:41.383170  234083 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1019 17:13:41.383218  234083 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.383242  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 17:13:41.383252  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.383180  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1019 17:13:41.383325  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 17:13:41.383410  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.383449  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1019 17:13:41.383524  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:13:41.383596  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1019 17:13:41.416370  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1019 17:13:41.418274  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1019 17:13:41.418300  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.418363  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1019 17:13:41.418424  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 17:13:41.418437  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1019 17:13:41.418366  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1019 17:13:41.421906  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1019 17:13:41.421914  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1019 17:13:41.421945  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1019 17:13:41.422018  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1019 17:13:41.473183  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1019 17:13:41.473197  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1019 17:13:41.473240  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1019 17:13:41.473252  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1019 17:13:41.473280  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1019 17:13:41.473307  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1019 17:13:41.473307  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1019 17:13:41.473322  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1019 17:13:41.473220  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.473335  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1019 17:13:41.473324  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1019 17:13:41.511187  234083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:41.541380  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 17:13:41.541442  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1019 17:13:41.541453  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1019 17:13:41.541474  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1019 17:13:41.541483  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
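
The repeated "Process exited with status 1" blocks above are not failures: `stat -c "%s %y" <path>` probes whether a file already exists on the node, and a non-zero exit simply triggers the scp from the host-side cache. A condensed sketch of the pattern, where the run and scp closures stand in for minikube's ssh_runner:

package main

import "fmt"

// ensureFile implements the stat-then-scp pattern from the log: probe the
// node, and only on a non-zero exit transfer the file from the cache.
func ensureFile(run func(cmd string) error, scp func(src, dst string) error, src, dst string) error {
	if err := run(fmt.Sprintf("stat -c \"%%s %%y\" %s", dst)); err == nil {
		return nil // already on the node; skip the copy
	}
	return scp(src, dst)
}

func main() {
	run := func(cmd string) error { return fmt.Errorf("simulated cache miss: %s", cmd) }
	scp := func(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil }
	fmt.Println(ensureFile(run, scp,
		"cache/images/amd64/registry.k8s.io/etcd_3.6.4-0",
		"/var/lib/minikube/images/etcd_3.6.4-0"))
}
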
	I1019 17:13:41.626113  234083 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1019 17:13:41.626164  234083 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:41.626218  234083 ssh_runner.go:195] Run: which crictl
	I1019 17:13:41.627574  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1019 17:13:41.627666  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1019 17:13:41.639886  234083 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1019 17:13:41.639962  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1019 17:13:41.661535  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:41.662371  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1019 17:13:41.662412  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1019 17:13:42.134351  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1019 17:13:42.134391  234083 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1019 17:13:42.134438  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1019 17:13:42.134438  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:42.164053  234083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:13:43.212272  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.077803396s)
	I1019 17:13:43.212301  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1019 17:13:43.212322  234083 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1019 17:13:43.212320  234083 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.048222071s)
	I1019 17:13:43.212359  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1019 17:13:43.212365  234083 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1019 17:13:43.212467  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:13:38.728094  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:38.728570  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:39.228182  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:39.228621  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:39.728227  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:39.728697  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:40.228223  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:40.228643  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:40.728424  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:40.728957  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:41.228771  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:41.229148  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:41.727770  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:41.728209  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:42.228108  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:42.228636  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:42.728228  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:42.728772  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:43.228267  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:43.228679  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
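
Meanwhile the interleaved run (pid 219832) is stuck in the apiserver restart window: it probes /healthz every 500ms and keeps getting "connection refused" until the endpoint answers or a deadline expires. A minimal poller under the same assumptions; the skipped TLS verification is a simplification, since the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint every 500ms until it
// returns 200 or the deadline passes. Sketch only; production code should
// verify TLS against the cluster CA instead of skipping verification.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute))
}
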
	W1019 17:13:41.901792  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	W1019 17:13:44.301996  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	W1019 17:13:46.302112  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	I1019 17:13:44.471926  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.259541598s)
	I1019 17:13:44.471961  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1019 17:13:44.471989  234083 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1019 17:13:44.472032  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1019 17:13:44.472098  234083 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.259604508s)
	I1019 17:13:44.472133  234083 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1019 17:13:44.472161  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1019 17:13:45.892817  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.420757406s)
	I1019 17:13:45.892842  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1019 17:13:45.892868  234083 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1019 17:13:45.892905  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1019 17:13:47.035490  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.142560723s)
	I1019 17:13:47.035522  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1019 17:13:47.035555  234083 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1019 17:13:47.035607  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1019 17:13:48.151925  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.116286862s)
	I1019 17:13:48.151958  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1019 17:13:48.151989  234083 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:13:48.152039  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
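
With the tarballs staged under /var/lib/minikube/images, each image is loaded sequentially with `sudo podman load -i` (CRI-O shares podman's on-disk image store), and the per-load timings show the large etcd archive dominating. A sketch of that loop, with a run closure standing in for ssh_runner:

package main

import "fmt"

// loadImages replays the "Loading image" sequence from the log: one
// `sudo podman load -i <tar>` per staged archive. Illustrative sketch.
func loadImages(run func(cmd string) error, tars []string) error {
	for _, t := range tars {
		if err := run("sudo podman load -i /var/lib/minikube/images/" + t); err != nil {
			return fmt.Errorf("loading %s: %w", t, err)
		}
	}
	return nil
}

func main() {
	run := func(cmd string) error { fmt.Println(cmd); return nil }
	fmt.Println(loadImages(run, []string{"pause_3.10.1", "kube-scheduler_v1.34.1", "etcd_3.6.4-0"}))
}
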
	I1019 17:13:43.728524  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:43.729008  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:44.228811  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:44.229229  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:44.727898  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:13:44.728476  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:13:45.228233  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:13:45.228323  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:13:45.259148  219832 cri.go:89] found id: "4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc"
	I1019 17:13:45.259170  219832 cri.go:89] found id: ""
	I1019 17:13:45.259184  219832 logs.go:282] 1 containers: [4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc]
	I1019 17:13:45.259242  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:45.263755  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:13:45.263833  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:13:45.292885  219832 cri.go:89] found id: ""
	I1019 17:13:45.292912  219832 logs.go:282] 0 containers: []
	W1019 17:13:45.292922  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:13:45.292930  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:13:45.292995  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:13:45.323096  219832 cri.go:89] found id: ""
	I1019 17:13:45.323122  219832 logs.go:282] 0 containers: []
	W1019 17:13:45.323133  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:13:45.323140  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:13:45.323199  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:13:45.352382  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:13:45.352406  219832 cri.go:89] found id: ""
	I1019 17:13:45.352416  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:13:45.352476  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:45.357095  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:13:45.357158  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:13:45.389298  219832 cri.go:89] found id: ""
	I1019 17:13:45.389323  219832 logs.go:282] 0 containers: []
	W1019 17:13:45.389331  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:13:45.389336  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:13:45.389391  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:13:45.418154  219832 cri.go:89] found id: "a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc"
	I1019 17:13:45.418181  219832 cri.go:89] found id: ""
	I1019 17:13:45.418192  219832 logs.go:282] 1 containers: [a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc]
	I1019 17:13:45.418271  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:45.422566  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:13:45.422637  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:13:45.453889  219832 cri.go:89] found id: ""
	I1019 17:13:45.453919  219832 logs.go:282] 0 containers: []
	W1019 17:13:45.453928  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:13:45.453934  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:13:45.454000  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:13:45.483224  219832 cri.go:89] found id: ""
	I1019 17:13:45.483253  219832 logs.go:282] 0 containers: []
	W1019 17:13:45.483265  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:13:45.483277  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:13:45.483291  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:13:45.547582  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:13:45.547609  219832 logs.go:123] Gathering logs for kube-apiserver [4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc] ...
	I1019 17:13:45.547626  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc"
	I1019 17:13:45.585838  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:13:45.585877  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:13:45.637776  219832 logs.go:123] Gathering logs for kube-controller-manager [a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc] ...
	I1019 17:13:45.637820  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc"
	I1019 17:13:45.668593  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:13:45.668623  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:13:45.715688  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:13:45.715729  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:13:45.753737  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:13:45.753772  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:13:45.844163  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:13:45.844202  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
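
When healthz keeps failing, the run falls back to the diagnostics pass above: it resolves container IDs with `crictl ps -a --quiet --name=<component>`, then tails the last 400 lines of each hit, plus the kubelet, dmesg, and CRI-O journals. A condensed sketch of that flow, again with an assumed runner closure:

package main

import (
	"fmt"
	"strings"
)

// gatherLogs mirrors the diagnostics pass: list container IDs for a
// component, then tail the last 400 lines of each. Sketch only.
func gatherLogs(run func(cmd string) (string, error), component string) {
	ids, err := run("sudo crictl ps -a --quiet --name=" + component)
	if err != nil {
		return
	}
	for _, id := range strings.Fields(ids) {
		if out, err := run("sudo crictl logs --tail 400 " + id); err == nil {
			fmt.Printf("=== %s %s ===\n%s\n", component, id, out)
		}
	}
}

func main() {
	run := func(cmd string) (string, error) { return "", fmt.Errorf("offline sketch: %s", cmd) }
	gatherLogs(run, "kube-apiserver")
}
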
	I1019 17:13:48.361139  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1019 17:13:48.302383  228157 node_ready.go:57] node "old-k8s-version-904967" has "Ready":"False" status (will retry)
	I1019 17:13:48.802801  228157 node_ready.go:49] node "old-k8s-version-904967" is "Ready"
	I1019 17:13:48.802840  228157 node_ready.go:38] duration metric: took 13.5044141s for node "old-k8s-version-904967" to be "Ready" ...
	I1019 17:13:48.802857  228157 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:13:48.802919  228157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:13:48.819027  228157 api_server.go:72] duration metric: took 14.066252069s to wait for apiserver process to appear ...
	I1019 17:13:48.819052  228157 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:13:48.819086  228157 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:13:48.824587  228157 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:13:48.826263  228157 api_server.go:141] control plane version: v1.28.0
	I1019 17:13:48.826294  228157 api_server.go:131] duration metric: took 7.233611ms to wait for apiserver health ...
	I1019 17:13:48.826315  228157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:13:48.831590  228157 system_pods.go:59] 8 kube-system pods found
	I1019 17:13:48.831652  228157 system_pods.go:61] "coredns-5dd5756b68-qdvcm" [02f42850-84fc-4535-a60e-e2fa878a54a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:13:48.831668  228157 system_pods.go:61] "etcd-old-k8s-version-904967" [327496fa-418e-4496-bb02-ad0c8589bb43] Running
	I1019 17:13:48.831676  228157 system_pods.go:61] "kindnet-lh8rm" [d76c47ed-ecd9-4b78-ac32-f6bd8a848989] Running
	I1019 17:13:48.831681  228157 system_pods.go:61] "kube-apiserver-old-k8s-version-904967" [fb738bfd-165e-416d-8ae0-e82db6a6574c] Running
	I1019 17:13:48.831687  228157 system_pods.go:61] "kube-controller-manager-old-k8s-version-904967" [3e3ec032-e8bf-46a3-b21e-19df08813e26] Running
	I1019 17:13:48.831692  228157 system_pods.go:61] "kube-proxy-gr6m9" [a804e301-f032-4125-9a56-00a958db2a49] Running
	I1019 17:13:48.831697  228157 system_pods.go:61] "kube-scheduler-old-k8s-version-904967" [14562e32-ca16-49c7-8c82-77ef0f687921] Running
	I1019 17:13:48.831706  228157 system_pods.go:61] "storage-provisioner" [0ec8b184-a07e-4609-9c21-00812610abb6] Running
	I1019 17:13:48.831715  228157 system_pods.go:74] duration metric: took 5.392681ms to wait for pod list to return data ...
	I1019 17:13:48.831725  228157 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:13:48.834548  228157 default_sa.go:45] found service account: "default"
	I1019 17:13:48.834577  228157 default_sa.go:55] duration metric: took 2.844979ms for default service account to be created ...
	I1019 17:13:48.834588  228157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:13:48.838165  228157 system_pods.go:86] 8 kube-system pods found
	I1019 17:13:48.838195  228157 system_pods.go:89] "coredns-5dd5756b68-qdvcm" [02f42850-84fc-4535-a60e-e2fa878a54a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:13:48.838202  228157 system_pods.go:89] "etcd-old-k8s-version-904967" [327496fa-418e-4496-bb02-ad0c8589bb43] Running
	I1019 17:13:48.838207  228157 system_pods.go:89] "kindnet-lh8rm" [d76c47ed-ecd9-4b78-ac32-f6bd8a848989] Running
	I1019 17:13:48.838211  228157 system_pods.go:89] "kube-apiserver-old-k8s-version-904967" [fb738bfd-165e-416d-8ae0-e82db6a6574c] Running
	I1019 17:13:48.838217  228157 system_pods.go:89] "kube-controller-manager-old-k8s-version-904967" [3e3ec032-e8bf-46a3-b21e-19df08813e26] Running
	I1019 17:13:48.838221  228157 system_pods.go:89] "kube-proxy-gr6m9" [a804e301-f032-4125-9a56-00a958db2a49] Running
	I1019 17:13:48.838224  228157 system_pods.go:89] "kube-scheduler-old-k8s-version-904967" [14562e32-ca16-49c7-8c82-77ef0f687921] Running
	I1019 17:13:48.838228  228157 system_pods.go:89] "storage-provisioner" [0ec8b184-a07e-4609-9c21-00812610abb6] Running
	I1019 17:13:48.838234  228157 system_pods.go:126] duration metric: took 3.641173ms to wait for k8s-apps to be running ...
	I1019 17:13:48.838243  228157 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:13:48.838286  228157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:13:48.852869  228157 system_svc.go:56] duration metric: took 14.613954ms WaitForService to wait for kubelet
	I1019 17:13:48.852900  228157 kubeadm.go:587] duration metric: took 14.100132274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:13:48.852925  228157 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:13:48.856170  228157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:13:48.856201  228157 node_conditions.go:123] node cpu capacity is 8
	I1019 17:13:48.856217  228157 node_conditions.go:105] duration metric: took 3.2862ms to run NodePressure ...
	I1019 17:13:48.856233  228157 start.go:242] waiting for startup goroutines ...
	I1019 17:13:48.856244  228157 start.go:247] waiting for cluster config update ...
	I1019 17:13:48.856258  228157 start.go:256] writing updated cluster config ...
	I1019 17:13:48.856595  228157 ssh_runner.go:195] Run: rm -f paused
	I1019 17:13:48.860929  228157 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:13:48.865830  228157 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qdvcm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.875005  228157 pod_ready.go:94] pod "coredns-5dd5756b68-qdvcm" is "Ready"
	I1019 17:13:49.875039  228157 pod_ready.go:86] duration metric: took 1.009176963s for pod "coredns-5dd5756b68-qdvcm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.878229  228157 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.882958  228157 pod_ready.go:94] pod "etcd-old-k8s-version-904967" is "Ready"
	I1019 17:13:49.882988  228157 pod_ready.go:86] duration metric: took 4.736998ms for pod "etcd-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.886300  228157 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.890945  228157 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-904967" is "Ready"
	I1019 17:13:49.890974  228157 pod_ready.go:86] duration metric: took 4.65018ms for pod "kube-apiserver-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:49.893658  228157 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:50.070542  228157 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-904967" is "Ready"
	I1019 17:13:50.070576  228157 pod_ready.go:86] duration metric: took 176.900171ms for pod "kube-controller-manager-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:50.270688  228157 pod_ready.go:83] waiting for pod "kube-proxy-gr6m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:50.670041  228157 pod_ready.go:94] pod "kube-proxy-gr6m9" is "Ready"
	I1019 17:13:50.670089  228157 pod_ready.go:86] duration metric: took 399.371436ms for pod "kube-proxy-gr6m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:50.871189  228157 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:51.270897  228157 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-904967" is "Ready"
	I1019 17:13:51.270925  228157 pod_ready.go:86] duration metric: took 399.705402ms for pod "kube-scheduler-old-k8s-version-904967" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:13:51.270939  228157 pod_ready.go:40] duration metric: took 2.409965776s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:13:51.321617  228157 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 17:13:51.419220  228157 out.go:203] 
	W1019 17:13:51.466749  228157 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 17:13:51.540001  228157 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 17:13:51.542496  228157 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-904967" cluster and "default" namespace by default
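
The pod_ready waits that close out this run poll each control-plane pod, selected by the labels listed at 17:13:48, until its Ready condition is True or the pod is gone. A client-go sketch of the same predicate for one of the pods above; the kubeconfig path matches the log, while the 500ms cadence is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// predicate the pod_ready waits apply to each component.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-qdvcm", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // cadence is an assumption
	}
}
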
	I1019 17:13:51.685959  234083 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.533891976s)
	I1019 17:13:51.685993  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1019 17:13:51.686035  234083 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:13:51.686099  234083 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:13:52.275615  234083 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3731/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 17:13:52.275671  234083 cache_images.go:125] Successfully loaded all cached images
	I1019 17:13:52.275678  234083 cache_images.go:94] duration metric: took 11.237195538s to LoadCachedImages
	I1019 17:13:52.275697  234083 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:13:52.275823  234083 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-806996 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-806996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:13:52.275897  234083 ssh_runner.go:195] Run: crio config
	I1019 17:13:52.326192  234083 cni.go:84] Creating CNI manager for ""
	I1019 17:13:52.326215  234083 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:13:52.326232  234083 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:13:52.326269  234083 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-806996 NodeName:no-preload-806996 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:13:52.326406  234083 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-806996"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
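
The stanzas above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the option struct logged at kubeadm.go:190 and, per the later scp line, land on the node as /var/tmp/minikube/kubeadm.yaml.new (2213 bytes). A toy text/template rendering of one stanza; the template text is illustrative, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Minimal rendering of a ClusterConfiguration stanza from a few of the
// kubeadm options seen in the log. Toy template, not minikube's own.
type opts struct {
	ClusterName, ServiceCIDR, PodSubnet, Version string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, opts{"mk", "10.96.0.0/12", "10.244.0.0/16", "v1.34.1"})
}
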
	I1019 17:13:52.326482  234083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:13:52.335328  234083 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1019 17:13:52.335390  234083 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1019 17:13:52.343889  234083 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1019 17:13:52.343961  234083 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1019 17:13:52.343960  234083 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1019 17:13:52.343970  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1019 17:13:52.348893  234083 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1019 17:13:52.348927  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1019 17:13:53.180106  234083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:13:53.194321  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1019 17:13:53.199729  234083 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1019 17:13:53.199762  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
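
The "?checksum=file:...sha256" query on the download URLs above is go-getter-style checksum syntax: fetch the binary, fetch the companion .sha256 digest, and reject the file on a mismatch. A stdlib-only sketch of the same guarantee (error handling and URL plumbing simplified):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dst and checks it against the hex digest
// published at url+".sha256". Simplified sketch of the checksum guarantee.
func fetchVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl", "/tmp/kubectl"))
}
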
	I1019 17:13:53.362167  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 17:13:53.362268  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:13:53.362324  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:13:53.390277  219832 cri.go:89] found id: "f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:13:53.390302  219832 cri.go:89] found id: "4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc"
	I1019 17:13:53.390308  219832 cri.go:89] found id: ""
	I1019 17:13:53.390317  219832 logs.go:282] 2 containers: [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca 4379237a6a992d0971da96f5ae580fa977b7f5ae05e9ffa12df83b527ec9d4cc]
	I1019 17:13:53.390364  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:53.394398  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:53.398117  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:13:53.398183  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:13:53.425932  219832 cri.go:89] found id: ""
	I1019 17:13:53.425961  219832 logs.go:282] 0 containers: []
	W1019 17:13:53.425973  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:13:53.425980  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:13:53.426035  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:13:53.455212  219832 cri.go:89] found id: ""
	I1019 17:13:53.455239  219832 logs.go:282] 0 containers: []
	W1019 17:13:53.455250  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:13:53.455257  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:13:53.455318  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:13:53.483582  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:13:53.483611  219832 cri.go:89] found id: ""
	I1019 17:13:53.483621  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:13:53.483696  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:53.487758  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:13:53.487834  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:13:53.557193  234083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1019 17:13:53.561368  234083 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1019 17:13:53.561406  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1019 17:13:53.747532  234083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:13:53.756949  234083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:13:53.772313  234083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:13:53.788158  234083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1019 17:13:53.802427  234083 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:13:53.806723  234083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:13:53.817923  234083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:13:53.901661  234083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:13:53.925932  234083 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996 for IP: 192.168.76.2
	I1019 17:13:53.925956  234083 certs.go:195] generating shared ca certs ...
	I1019 17:13:53.925976  234083 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:53.926160  234083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:13:53.926222  234083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:13:53.926235  234083 certs.go:257] generating profile certs ...
	I1019 17:13:53.926301  234083 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.key
	I1019 17:13:53.926317  234083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.crt with IP's: []
	I1019 17:13:54.107340  234083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.crt ...
	I1019 17:13:54.107369  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.crt: {Name:mk6bb4eb725d5c9ee55438555d19a89afd76b82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:54.107554  234083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.key ...
	I1019 17:13:54.107566  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.key: {Name:mk934cf1977cb6a475c919cb165720b5e7981729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:54.107664  234083 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key.d2617c27
	I1019 17:13:54.107679  234083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt.d2617c27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:13:54.747194  234083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt.d2617c27 ...
	I1019 17:13:54.747226  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt.d2617c27: {Name:mk83a4d35c5ee807fd675e2ce5837f2d4166c986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:54.747424  234083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key.d2617c27 ...
	I1019 17:13:54.747443  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key.d2617c27: {Name:mk91d7067fc4b54af702443f4959766d7df64368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:54.747552  234083 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt.d2617c27 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt
	I1019 17:13:54.747672  234083 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key.d2617c27 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key
	I1019 17:13:54.747771  234083 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.key
	I1019 17:13:54.747797  234083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.crt with IP's: []
	I1019 17:13:55.002962  234083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.crt ...
	I1019 17:13:55.002995  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.crt: {Name:mkf9b54d81b6f4a732e58302938123365024405d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:13:55.003212  234083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.key ...
	I1019 17:13:55.003232  234083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.key: {Name:mk073a9ae31739c418a21063c714563ce0a2e9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
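
The profile-cert generation above issues an apiserver serving certificate signed by the shared minikubeCA, with the service IP, loopback, and node IP from the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) as SANs. A self-contained Go sketch of that flow, assuming RSA keys and standing in for certs.go/crypto.go (the real CA would be loaded from ca.crt/ca.key on disk; here a throwaway CA is generated so the example runs):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signServingCert issues a server cert signed by ca, with the cluster and
    // node IPs from the log above as subject alternative names.
    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

    func main() {
    	// Throwaway self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	certPEM, _, err := signServingCert(ca, caKey)
    	if err != nil {
    		panic(err)
    	}
    	os.Stdout.Write(certPEM)
    }
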
	I1019 17:13:55.003452  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:13:55.003508  234083 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:13:55.003524  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:13:55.003557  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:13:55.003604  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:13:55.003642  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:13:55.003700  234083 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:13:55.004336  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:13:55.024214  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:13:55.042998  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:13:55.061903  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:13:55.081381  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:13:55.101172  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:13:55.119442  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:13:55.137387  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:13:55.155658  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:13:55.175493  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:13:55.194709  234083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:13:55.212989  234083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:13:55.226486  234083 ssh_runner.go:195] Run: openssl version
	I1019 17:13:55.232983  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:13:55.242153  234083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:13:55.246194  234083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:13:55.246252  234083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:13:55.282323  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:13:55.291815  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:13:55.301145  234083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:13:55.305621  234083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:13:55.305683  234083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:13:55.341355  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:13:55.351180  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:13:55.360333  234083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:13:55.364428  234083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:13:55.364482  234083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:13:55.400995  234083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
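
The openssl/ln sequence above installs each CA into the system trust directory: OpenSSL looks up CAs by subject-hash filename, so `openssl x509 -hash -noout -in cert` prints the 8-hex-digit hash (b5213941 for minikubeCA here) and the cert is symlinked as /etc/ssl/certs/<hash>.0. A sketch of the same steps, shelling out to openssl the way minikube runs these commands over SSH (installTrusted is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installTrusted links certPath into trustDir under its OpenSSL subject
    // hash (<hash>.0), which is how c_rehash-style lookup finds CA certs.
    func installTrusted(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(trustDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link (the ln -fs behaviour)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
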
	I1019 17:13:55.411054  234083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:13:55.415398  234083 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:13:55.415466  234083 kubeadm.go:401] StartCluster: {Name:no-preload-806996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-806996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:13:55.415561  234083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:13:55.415626  234083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:13:55.445309  234083 cri.go:89] found id: ""
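
The cri.go step above lists containers by shelling out to crictl with a label filter; the empty result (found id: "") tells minikube no kube-system containers exist yet, i.e. this is a fresh node. A sketch of that query, assuming crictl is on PATH (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listPodContainers returns container IDs in the given namespace by
    // filtering on the io.kubernetes.pod.namespace label, as in the log above.
    func listPodContainers(namespace string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
    	if err != nil {
    		return nil, err
    	}
    	// --quiet prints one ID per line; Fields yields nil for empty output.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listPodContainers("kube-system")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
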
	I1019 17:13:55.445380  234083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:13:55.454417  234083 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:13:55.463178  234083 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:13:55.463247  234083 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:13:55.472933  234083 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:13:55.472953  234083 kubeadm.go:158] found existing configuration files:
	
	I1019 17:13:55.473015  234083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:13:55.481143  234083 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:13:55.481209  234083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:13:55.489218  234083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:13:55.497445  234083 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:13:55.497499  234083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:13:55.505631  234083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:13:55.513651  234083 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:13:55.513722  234083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:13:55.521433  234083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:13:55.529661  234083 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:13:55.529727  234083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
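
The four grep/rm pairs above are the stale-config cleanup: any of admin.conf, kubelet.conf, controller-manager.conf, or scheduler.conf that does not point at https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. A compact in-process version of the same check (an assumption: reading the files directly rather than shelling out over SSH as minikube does):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, name := range confs {
    		path := "/etc/kubernetes/" + name
    		data, err := os.ReadFile(path)
    		// Missing file or wrong endpoint: remove so kubeadm init rewrites it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path)
    			fmt.Println("removed stale", path)
    		}
    	}
    }
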
	I1019 17:13:55.537643  234083 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:13:55.596431  234083 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:13:55.655190  234083 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
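
The kubeadm init invocation above prefixes PATH with the pinned v1.34.1 binaries directory and passes a fixed --ignore-preflight-errors list, so checks that cannot hold inside a Docker container (SystemVerification, Swap, already-populated dirs, shared ports) warn instead of aborting; the two [WARNING] lines that follow are exactly those downgraded checks. A sketch of building and running that command with a subset of the flags from the log (a hypothetical wrapper, not ssh_runner.go itself):

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Subset of the ignore list from the log above, for brevity.
    	ignored := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
    		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    	}
    	// Mirror the log's `sudo /bin/bash -c "env PATH=... kubeadm init ..."`.
    	script := `env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init ` +
    		"--config /var/tmp/minikube/kubeadm.yaml " +
    		"--ignore-preflight-errors=" + strings.Join(ignored, ",")
    	cmd := exec.Command("sudo", "/bin/bash", "-c", script)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }
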
	I1019 17:13:53.516234  219832 cri.go:89] found id: ""
	I1019 17:13:53.516258  219832 logs.go:282] 0 containers: []
	W1019 17:13:53.516265  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:13:53.516271  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:13:53.516332  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:13:53.545150  219832 cri.go:89] found id: "5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:13:53.545177  219832 cri.go:89] found id: "a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc"
	I1019 17:13:53.545183  219832 cri.go:89] found id: ""
	I1019 17:13:53.545192  219832 logs.go:282] 2 containers: [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f a804d6a79a2786825f0f4c7fd6547d3a0952cd13888056690ff445193dc544cc]
	I1019 17:13:53.545240  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:53.549529  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:13:53.553533  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:13:53.553612  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:13:53.583569  219832 cri.go:89] found id: ""
	I1019 17:13:53.583602  219832 logs.go:282] 0 containers: []
	W1019 17:13:53.583615  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:13:53.583623  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:13:53.583693  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:13:53.616145  219832 cri.go:89] found id: ""
	I1019 17:13:53.616178  219832 logs.go:282] 0 containers: []
	W1019 17:13:53.616191  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:13:53.616211  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:13:53.616234  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:13:53.704451  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:13:53.704488  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Oct 19 17:13:48 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:48.722609499Z" level=info msg="Starting container: 0f8041e734561c71ae61e9cf39a4276aa8522082cc85bdcaa30e7e0c3a38ed1c" id=0de8d9e2-4207-4656-8105-b3aaf761252d name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:13:48 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:48.724483779Z" level=info msg="Started container" PID=2138 containerID=0f8041e734561c71ae61e9cf39a4276aa8522082cc85bdcaa30e7e0c3a38ed1c description=kube-system/coredns-5dd5756b68-qdvcm/coredns id=0de8d9e2-4207-4656-8105-b3aaf761252d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f34e596533dd6d06201bcd25a04c505578e8da30f87a779a5ea39621d0106900
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.01124841Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6bf45058-e442-4aa9-99e1-8b5b5e4b616f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.011346585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.016821248Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f05a72fc2f1b0fc79ee649a2e747894ec76f22d1504f23415f76d57069f799cc UID:f8226db2-996c-424a-b64b-99ee92815957 NetNS:/var/run/netns/d5d7c12a-3637-4e37-86fa-f3c1f4df436e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d971a0}] Aliases:map[]}"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.016860554Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.028261449Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f05a72fc2f1b0fc79ee649a2e747894ec76f22d1504f23415f76d57069f799cc UID:f8226db2-996c-424a-b64b-99ee92815957 NetNS:/var/run/netns/d5d7c12a-3637-4e37-86fa-f3c1f4df436e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d971a0}] Aliases:map[]}"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.028410997Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.029241477Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.030452921Z" level=info msg="Ran pod sandbox f05a72fc2f1b0fc79ee649a2e747894ec76f22d1504f23415f76d57069f799cc with infra container: default/busybox/POD" id=6bf45058-e442-4aa9-99e1-8b5b5e4b616f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.031851581Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4cce93e7-ae9b-4205-a2c9-6e66e1b3f1a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.031991895Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4cce93e7-ae9b-4205-a2c9-6e66e1b3f1a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.032035658Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4cce93e7-ae9b-4205-a2c9-6e66e1b3f1a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.032637295Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4c489c3e-0647-45f1-b730-d34acd2ddc99 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.034382386Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.763284596Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4c489c3e-0647-45f1-b730-d34acd2ddc99 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.764300927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f7910563-240f-4e39-bd48-ae7d35e138b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.766663301Z" level=info msg="Creating container: default/busybox/busybox" id=529be480-7334-4d62-8059-f82012d8b82f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.767464291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.772556854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.773206847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.797971144Z" level=info msg="Created container 4f1a59231d6d82a1cdcbb0f809b0babc0debf224f16c3bc26f32d9743c65ccb0: default/busybox/busybox" id=529be480-7334-4d62-8059-f82012d8b82f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.798748134Z" level=info msg="Starting container: 4f1a59231d6d82a1cdcbb0f809b0babc0debf224f16c3bc26f32d9743c65ccb0" id=c9b8e437-8686-4b5f-855c-b01709647708 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:13:52 old-k8s-version-904967 crio[775]: time="2025-10-19T17:13:52.801109895Z" level=info msg="Started container" PID=2218 containerID=4f1a59231d6d82a1cdcbb0f809b0babc0debf224f16c3bc26f32d9743c65ccb0 description=default/busybox/busybox id=c9b8e437-8686-4b5f-855c-b01709647708 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f05a72fc2f1b0fc79ee649a2e747894ec76f22d1504f23415f76d57069f799cc
	Oct 19 17:14:00 old-k8s-version-904967 crio[775]: time="2025-10-19T17:14:00.819593654Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	4f1a59231d6d8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   f05a72fc2f1b0       busybox                                          default
	0f8041e734561       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   f34e596533dd6       coredns-5dd5756b68-qdvcm                         kube-system
	e4c5d6afd3b51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   0cff5716d5267       storage-provisioner                              kube-system
	7feae8ecf6637       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   d8fefe6d0dbda       kindnet-lh8rm                                    kube-system
	3950e2bee2546       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   e822f8d0393ed       kube-proxy-gr6m9                                 kube-system
	ae622b8371e1c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   89f1cfba9bf42       etcd-old-k8s-version-904967                      kube-system
	5adc56172cf32       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   efbbf2022ec08       kube-scheduler-old-k8s-version-904967            kube-system
	a0349cdc17a23       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   453e1b05488f6       kube-controller-manager-old-k8s-version-904967   kube-system
	f4cddadd70a9c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   102d6ccb088f9       kube-apiserver-old-k8s-version-904967            kube-system
	
	
	==> coredns [0f8041e734561c71ae61e9cf39a4276aa8522082cc85bdcaa30e7e0c3a38ed1c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49935 - 23730 "HINFO IN 8493094700721446701.7392202249581327913. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.454478829s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-904967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-904967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-904967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-904967
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:14:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:13:52 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:13:52 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:13:52 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:13:52 +0000   Sun, 19 Oct 2025 17:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-904967
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                1f7bd5b5-08c8-4ce1-be37-64fa8f96d211
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-qdvcm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-904967                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-lh8rm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-904967             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-904967    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-gr6m9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-904967             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-904967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-904967 event: Registered Node old-k8s-version-904967 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-904967 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [ae622b8371e1c32ba94db0acfa27fd62149dc9487d15df5351d7d860e314a55e] <==
	{"level":"info","ts":"2025-10-19T17:13:17.028573Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T17:13:17.029921Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:13:17.03006Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:13:17.030119Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:13:17.030287Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:13:17.030335Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:13:17.11541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-19T17:13:17.115585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-19T17:13:17.115629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-19T17:13:17.115654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:13:17.115664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:13:17.115677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-19T17:13:17.11569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:13:17.116684Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:13:17.117686Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-904967 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:13:17.117862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:13:17.117915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:13:17.119353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:13:17.119412Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T17:13:17.120134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:13:17.120272Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:13:17.120308Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:13:17.120372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:13:17.120389Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:13:35.741486Z","caller":"traceutil/trace.go:171","msg":"trace[2005299982] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"113.646941ms","start":"2025-10-19T17:13:35.627815Z","end":"2025-10-19T17:13:35.741462Z","steps":["trace[2005299982] 'process raft request'  (duration: 113.499317ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:14:02 up 56 min,  0 user,  load average: 3.53, 2.79, 1.60
	Linux old-k8s-version-904967 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7feae8ecf6637e7f9a0e3153a4ad01a02a5f0d4f51656ecc849994e6aa55c5a8] <==
	I1019 17:13:37.619056       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:13:37.667541       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:13:37.667683       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:13:37.667705       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:13:37.667735       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:13:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:13:37.916476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:13:37.916565       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:13:37.916579       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:13:37.916846       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:13:38.267256       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:13:38.267287       1 metrics.go:72] Registering metrics
	I1019 17:13:38.267363       1 controller.go:711] "Syncing nftables rules"
	I1019 17:13:47.876205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:13:47.876264       1 main.go:301] handling current node
	I1019 17:13:57.873185       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:13:57.873242       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4cddadd70a9c209b36d5fce2e575304c443df75fe7fce53b2bb842539c837de] <==
	I1019 17:13:18.511619       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:13:18.511633       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:13:18.511643       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:13:18.511650       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:13:18.511654       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 17:13:18.512057       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 17:13:18.512336       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:13:18.512417       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:13:18.518770       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 17:13:18.539884       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:13:19.410498       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:13:19.414442       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:13:19.414461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:13:19.874890       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:13:19.914176       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:13:20.016740       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:13:20.023165       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:13:20.024486       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 17:13:20.030568       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:13:20.448802       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:13:21.627190       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:13:21.637712       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:13:21.650897       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1019 17:13:34.740411       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:13:34.917728       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0349cdc17a23307b2d64cba9ae13c5a68cb68bd1bb461e7985aed79adea2dfe] <==
	I1019 17:13:34.848923       1 shared_informer.go:318] Caches are synced for disruption
	I1019 17:13:34.897642       1 shared_informer.go:318] Caches are synced for deployment
	I1019 17:13:34.899027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1019 17:13:34.911880       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 17:13:34.925820       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1019 17:13:34.958203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8l6zh"
	I1019 17:13:34.979171       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qdvcm"
	I1019 17:13:34.999818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.397285ms"
	I1019 17:13:35.013614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.455104ms"
	I1019 17:13:35.013835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.042µs"
	I1019 17:13:35.021370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="839.079µs"
	I1019 17:13:35.241209       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:13:35.247396       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:13:35.247438       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:13:35.337209       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1019 17:13:35.356360       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8l6zh"
	I1019 17:13:35.370200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.805712ms"
	I1019 17:13:35.379841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.564959ms"
	I1019 17:13:35.380795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.132µs"
	I1019 17:13:48.364309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="239.796µs"
	I1019 17:13:48.378166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="150.972µs"
	I1019 17:13:48.799366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.214µs"
	I1019 17:13:49.699508       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1019 17:13:49.815127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.339325ms"
	I1019 17:13:49.815346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.8µs"
	
	
	==> kube-proxy [3950e2bee2546057471dd8c9aa3a73c36300b9dbad60628436f7cbbd3ca39bf3] <==
	I1019 17:13:35.507482       1 server_others.go:69] "Using iptables proxy"
	I1019 17:13:35.519644       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:13:35.544623       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:13:35.547982       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:13:35.548042       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:13:35.548053       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:13:35.548103       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:13:35.548449       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:13:35.548475       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:13:35.550043       1 config.go:188] "Starting service config controller"
	I1019 17:13:35.550207       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:13:35.550268       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:13:35.550292       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:13:35.551039       1 config.go:315] "Starting node config controller"
	I1019 17:13:35.554569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:13:35.651144       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 17:13:35.651162       1 shared_informer.go:318] Caches are synced for service config
	I1019 17:13:35.654666       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5adc56172cf322fce6b7a5d2d35acd1b1a248eb14413b467d9260dc6adae9145] <==
	W1019 17:13:18.465823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 17:13:18.465883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1019 17:13:18.466201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 17:13:18.466242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 17:13:19.329567       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 17:13:19.329600       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:13:19.373618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 17:13:19.373650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1019 17:13:19.506014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 17:13:19.506061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 17:13:19.559671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 17:13:19.559711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1019 17:13:19.574307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 17:13:19.574346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1019 17:13:19.575222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 17:13:19.575254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 17:13:19.639106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 17:13:19.639155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1019 17:13:19.656582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 17:13:19.656620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1019 17:13:19.673601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 17:13:19.673644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1019 17:13:19.690233       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 17:13:19.690271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1019 17:13:22.453496       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
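	
	The "forbidden" warnings above are transient: the scheduler's informers start before the API server has finished installing the default RBAC bindings, and they stop once the caches sync at 17:13:22. A diagnostic sketch for cross-checking afterwards ("system:kube-scheduler" is the upstream default clusterrolebinding, not something this test asserts on):
	
	  kubectl --context old-k8s-version-904967 get clusterrolebinding system:kube-scheduler -o wide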
	
	
	==> kubelet <==
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.792565    1380 topology_manager.go:215] "Topology Admit Handler" podUID="d76c47ed-ecd9-4b78-ac32-f6bd8a848989" podNamespace="kube-system" podName="kindnet-lh8rm"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.821342    1380 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.823481    1380 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.982528    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a804e301-f032-4125-9a56-00a958db2a49-kube-proxy\") pod \"kube-proxy-gr6m9\" (UID: \"a804e301-f032-4125-9a56-00a958db2a49\") " pod="kube-system/kube-proxy-gr6m9"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.982855    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76c47ed-ecd9-4b78-ac32-f6bd8a848989-lib-modules\") pod \"kindnet-lh8rm\" (UID: \"d76c47ed-ecd9-4b78-ac32-f6bd8a848989\") " pod="kube-system/kindnet-lh8rm"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.983017    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqbr\" (UniqueName: \"kubernetes.io/projected/d76c47ed-ecd9-4b78-ac32-f6bd8a848989-kube-api-access-lvqbr\") pod \"kindnet-lh8rm\" (UID: \"d76c47ed-ecd9-4b78-ac32-f6bd8a848989\") " pod="kube-system/kindnet-lh8rm"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.983419    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a804e301-f032-4125-9a56-00a958db2a49-lib-modules\") pod \"kube-proxy-gr6m9\" (UID: \"a804e301-f032-4125-9a56-00a958db2a49\") " pod="kube-system/kube-proxy-gr6m9"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.983588    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d76c47ed-ecd9-4b78-ac32-f6bd8a848989-cni-cfg\") pod \"kindnet-lh8rm\" (UID: \"d76c47ed-ecd9-4b78-ac32-f6bd8a848989\") " pod="kube-system/kindnet-lh8rm"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.986177    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8ks\" (UniqueName: \"kubernetes.io/projected/a804e301-f032-4125-9a56-00a958db2a49-kube-api-access-rc8ks\") pod \"kube-proxy-gr6m9\" (UID: \"a804e301-f032-4125-9a56-00a958db2a49\") " pod="kube-system/kube-proxy-gr6m9"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.986227    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a804e301-f032-4125-9a56-00a958db2a49-xtables-lock\") pod \"kube-proxy-gr6m9\" (UID: \"a804e301-f032-4125-9a56-00a958db2a49\") " pod="kube-system/kube-proxy-gr6m9"
	Oct 19 17:13:34 old-k8s-version-904967 kubelet[1380]: I1019 17:13:34.986254    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76c47ed-ecd9-4b78-ac32-f6bd8a848989-xtables-lock\") pod \"kindnet-lh8rm\" (UID: \"d76c47ed-ecd9-4b78-ac32-f6bd8a848989\") " pod="kube-system/kindnet-lh8rm"
	Oct 19 17:13:35 old-k8s-version-904967 kubelet[1380]: I1019 17:13:35.794390    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gr6m9" podStartSLOduration=1.794337329 podCreationTimestamp="2025-10-19 17:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:13:35.793702168 +0000 UTC m=+14.191892002" watchObservedRunningTime="2025-10-19 17:13:35.794337329 +0000 UTC m=+14.192527163"
	Oct 19 17:13:41 old-k8s-version-904967 kubelet[1380]: I1019 17:13:41.726662    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lh8rm" podStartSLOduration=5.7523955860000004 podCreationTimestamp="2025-10-19 17:13:34 +0000 UTC" firstStartedPulling="2025-10-19 17:13:35.405036398 +0000 UTC m=+13.803226227" lastFinishedPulling="2025-10-19 17:13:37.379245185 +0000 UTC m=+15.777435009" observedRunningTime="2025-10-19 17:13:37.769892654 +0000 UTC m=+16.168082486" watchObservedRunningTime="2025-10-19 17:13:41.726604368 +0000 UTC m=+20.124794200"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.338008    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.364387    1380 topology_manager.go:215] "Topology Admit Handler" podUID="02f42850-84fc-4535-a60e-e2fa878a54a3" podNamespace="kube-system" podName="coredns-5dd5756b68-qdvcm"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.365781    1380 topology_manager.go:215] "Topology Admit Handler" podUID="0ec8b184-a07e-4609-9c21-00812610abb6" podNamespace="kube-system" podName="storage-provisioner"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.375485    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8k7f\" (UniqueName: \"kubernetes.io/projected/02f42850-84fc-4535-a60e-e2fa878a54a3-kube-api-access-w8k7f\") pod \"coredns-5dd5756b68-qdvcm\" (UID: \"02f42850-84fc-4535-a60e-e2fa878a54a3\") " pod="kube-system/coredns-5dd5756b68-qdvcm"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.375546    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ec8b184-a07e-4609-9c21-00812610abb6-tmp\") pod \"storage-provisioner\" (UID: \"0ec8b184-a07e-4609-9c21-00812610abb6\") " pod="kube-system/storage-provisioner"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.375580    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvnj6\" (UniqueName: \"kubernetes.io/projected/0ec8b184-a07e-4609-9c21-00812610abb6-kube-api-access-hvnj6\") pod \"storage-provisioner\" (UID: \"0ec8b184-a07e-4609-9c21-00812610abb6\") " pod="kube-system/storage-provisioner"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.375673    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02f42850-84fc-4535-a60e-e2fa878a54a3-config-volume\") pod \"coredns-5dd5756b68-qdvcm\" (UID: \"02f42850-84fc-4535-a60e-e2fa878a54a3\") " pod="kube-system/coredns-5dd5756b68-qdvcm"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.801925    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qdvcm" podStartSLOduration=14.801845455 podCreationTimestamp="2025-10-19 17:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:13:48.800578272 +0000 UTC m=+27.198768105" watchObservedRunningTime="2025-10-19 17:13:48.801845455 +0000 UTC m=+27.200035288"
	Oct 19 17:13:48 old-k8s-version-904967 kubelet[1380]: I1019 17:13:48.814504    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.814440376 podCreationTimestamp="2025-10-19 17:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:13:48.814094297 +0000 UTC m=+27.212284127" watchObservedRunningTime="2025-10-19 17:13:48.814440376 +0000 UTC m=+27.212630254"
	Oct 19 17:13:51 old-k8s-version-904967 kubelet[1380]: I1019 17:13:51.709732    1380 topology_manager.go:215] "Topology Admit Handler" podUID="f8226db2-996c-424a-b64b-99ee92815957" podNamespace="default" podName="busybox"
	Oct 19 17:13:51 old-k8s-version-904967 kubelet[1380]: I1019 17:13:51.795642    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppgr\" (UniqueName: \"kubernetes.io/projected/f8226db2-996c-424a-b64b-99ee92815957-kube-api-access-5ppgr\") pod \"busybox\" (UID: \"f8226db2-996c-424a-b64b-99ee92815957\") " pod="default/busybox"
	Oct 19 17:13:53 old-k8s-version-904967 kubelet[1380]: I1019 17:13:53.811865    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.080437443 podCreationTimestamp="2025-10-19 17:13:51 +0000 UTC" firstStartedPulling="2025-10-19 17:13:52.032242526 +0000 UTC m=+30.430432352" lastFinishedPulling="2025-10-19 17:13:52.763622403 +0000 UTC m=+31.161812233" observedRunningTime="2025-10-19 17:13:53.811391958 +0000 UTC m=+32.209581792" watchObservedRunningTime="2025-10-19 17:13:53.811817324 +0000 UTC m=+32.210007156"
	
	
	==> storage-provisioner [e4c5d6afd3b51b643f021d90dfca9b2e8d64b5cbca6184959cf9bbf7c3f1ecb1] <==
	I1019 17:13:48.731654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:13:48.741851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:13:48.741901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:13:48.750336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:13:48.750419       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eb69d94-a491-4ab8-b2b9-5d7636ed3c57", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-904967_89e9dfdd-ffe4-48da-b252-ab08771de10d became leader
	I1019 17:13:48.750530       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_89e9dfdd-ffe4-48da-b252-ab08771de10d!
	I1019 17:13:48.850877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_89e9dfdd-ffe4-48da-b252-ab08771de10d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-904967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.513111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:14:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
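
The exit status 11 above is produced by minikube's pause check, not by the addon itself: "addons enable" first lists paused containers by shelling out to "sudo runc list -f json", and on this CRI-O node runc's default state directory /run/runc does not exist. A minimal diagnosis sketch, reusing only commands quoted elsewhere in this report (the ssh invocation is an assumption about how one would reach the node):

  # confirm the state directory runc expects is absent
  out/minikube-linux-amd64 -p no-preload-806996 ssh -- sudo ls /run/runc
  # reproduces the "open /run/runc: no such file or directory" error above
  out/minikube-linux-amd64 -p no-preload-806996 ssh -- sudo runc list -f json
  # the containers themselves are still visible through the CRI
  out/minikube-linux-amd64 -p no-preload-806996 ssh -- sudo crictl ps -a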
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-806996 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-806996 describe deploy/metrics-server -n kube-system: exit status 1 (61.388282ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-806996 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
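
For context, the assertion at start_stop_delete_test.go:219 checks that the metrics-server deployment's container image carries the overridden registry prefix. Had the addon been enabled, the value could be read directly (a sketch; the jsonpath assumes the stock single-container metrics-server manifest):

  kubectl --context no-preload-806996 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4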
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-806996
helpers_test.go:243: (dbg) docker inspect no-preload-806996:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	        "Created": "2025-10-19T17:13:34.261937795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:13:34.301458918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hosts",
	        "LogPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365-json.log",
	        "Name": "/no-preload-806996",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-806996:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-806996",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	                "LowerDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-806996",
	                "Source": "/var/lib/docker/volumes/no-preload-806996/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-806996",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-806996",
	                "name.minikube.sigs.k8s.io": "no-preload-806996",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3280f458d5698a03a03d7244fa996c75c46e72ac11f885eecf9f29762e05154",
	            "SandboxKey": "/var/run/docker/netns/a3280f458d56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-806996": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:ba:6b:58:37:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73bac96357aad3b7cfe938f1f5873c93097c59bb8fc57dcc5d67449be0149246",
	                    "EndpointID": "96d44db2e9e78e97b50af026a9844ee21397441d9608e6c49a22e5ef06133591",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-806996",
	                        "2bbe9c0feed5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
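
Note that HostConfig.PortBindings requests ephemeral host ports (HostPort ""), while NetworkSettings.Ports shows the ports Docker actually assigned (33059-33063). Individual fields can be pulled without scanning the full dump, e.g. with docker inspect's Go-template flag (a sketch):

  docker inspect no-preload-806996 --format '{{.State.Status}} pid={{.State.Pid}}'
  docker inspect no-preload-806996 --format '{{json .NetworkSettings.Ports}}'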
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-806996 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-806996 logs -n 25: (1.014067854s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --cancel-scheduled                                                                                                                                                                                                   │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:08 UTC │ 19 Oct 25 17:08 UTC │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │                     │
	│ stop    │ -p scheduled-stop-575331 --schedule 15s                                                                                                                                                                                                       │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:09 UTC │ 19 Oct 25 17:09 UTC │
	│ delete  │ -p scheduled-stop-575331                                                                                                                                                                                                                      │ scheduled-stop-575331     │ jenkins │ v1.37.0 │ 19 Oct 25 17:10 UTC │ 19 Oct 25 17:10 UTC │
	│ stop    │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ cert-options-639932 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ -p cert-options-639932 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ delete  │ -p cert-options-639932                                                                                                                                                                                                                        │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │                     │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-447724    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ stop    │ stopped-upgrade-659566 stop                                                                                                                                                                                                                   │ stopped-upgrade-659566    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:14:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:14:19.959160  241081 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:14:19.959426  241081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:14:19.959435  241081 out.go:374] Setting ErrFile to fd 2...
	I1019 17:14:19.959440  241081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:14:19.959655  241081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:14:19.960148  241081 out.go:368] Setting JSON to false
	I1019 17:14:19.961339  241081 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3406,"bootTime":1760890654,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:14:19.961425  241081 start.go:143] virtualization: kvm guest
	I1019 17:14:19.963662  241081 out.go:179] * [old-k8s-version-904967] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:14:19.965237  241081 notify.go:221] Checking for updates...
	I1019 17:14:19.965288  241081 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:14:19.969282  241081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:14:19.970869  241081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:14:19.972182  241081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:14:19.973466  241081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:14:19.974693  241081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:14:19.976537  241081 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:14:19.978469  241081 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:14:19.979515  241081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:14:20.005439  241081 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:14:20.005537  241081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:14:20.071475  241081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 17:14:20.059887614 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:14:20.071635  241081 docker.go:319] overlay module found
	I1019 17:14:20.073326  241081 out.go:179] * Using the docker driver based on existing profile
	I1019 17:14:20.074548  241081 start.go:309] selected driver: docker
	I1019 17:14:20.074568  241081 start.go:930] validating driver "docker" against &{Name:old-k8s-version-904967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-904967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:14:20.074700  241081 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:14:20.075376  241081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:14:20.140510  241081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 17:14:20.129394477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:14:20.140916  241081 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:14:20.140977  241081 cni.go:84] Creating CNI manager for ""
	I1019 17:14:20.141047  241081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:14:20.141138  241081 start.go:353] cluster config:
	{Name:old-k8s-version-904967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-904967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:14:20.145108  241081 out.go:179] * Starting "old-k8s-version-904967" primary control-plane node in "old-k8s-version-904967" cluster
	I1019 17:14:20.146366  241081 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:14:20.147744  241081 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:14:20.149000  241081 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:14:20.149059  241081 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:14:20.149108  241081 cache.go:59] Caching tarball of preloaded images
	I1019 17:14:20.149111  241081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:14:20.149230  241081 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:14:20.149249  241081 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 17:14:20.149381  241081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/config.json ...
	I1019 17:14:20.170989  241081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:14:20.171008  241081 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:14:20.171025  241081 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:14:20.171055  241081 start.go:360] acquireMachinesLock for old-k8s-version-904967: {Name:mkc44a577dfa0377f341fb5c99981aa03168fb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:14:20.171149  241081 start.go:364] duration metric: took 48.99µs to acquireMachinesLock for "old-k8s-version-904967"
	I1019 17:14:20.171172  241081 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:14:20.171179  241081 fix.go:54] fixHost starting: 
	I1019 17:14:20.171377  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:20.191017  241081 fix.go:112] recreateIfNeeded on old-k8s-version-904967: state=Stopped err=<nil>
	W1019 17:14:20.191057  241081 fix.go:138] unexpected machine state, will restart: <nil>
	W1019 17:14:18.583843  234083 node_ready.go:57] node "no-preload-806996" has "Ready":"False" status (will retry)
	W1019 17:14:20.588598  234083 node_ready.go:57] node "no-preload-806996" has "Ready":"False" status (will retry)
	W1019 17:14:23.082975  234083 node_ready.go:57] node "no-preload-806996" has "Ready":"False" status (will retry)
	I1019 17:14:19.508220  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:14:19.508715  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:14:19.508775  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:14:19.508825  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:14:19.538766  219832 cri.go:89] found id: "f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:19.538787  219832 cri.go:89] found id: ""
	I1019 17:14:19.538795  219832 logs.go:282] 1 containers: [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca]
	I1019 17:14:19.538861  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:19.543681  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:14:19.543748  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:14:19.574949  219832 cri.go:89] found id: ""
	I1019 17:14:19.574971  219832 logs.go:282] 0 containers: []
	W1019 17:14:19.574984  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:14:19.574989  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:14:19.575053  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:14:19.607784  219832 cri.go:89] found id: ""
	I1019 17:14:19.607813  219832 logs.go:282] 0 containers: []
	W1019 17:14:19.607823  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:14:19.607830  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:14:19.607882  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:14:19.639298  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:19.639324  219832 cri.go:89] found id: ""
	I1019 17:14:19.639334  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:14:19.639398  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:19.643616  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:14:19.643687  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:14:19.674292  219832 cri.go:89] found id: ""
	I1019 17:14:19.674317  219832 logs.go:282] 0 containers: []
	W1019 17:14:19.674327  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:14:19.674334  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:14:19.674397  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:14:19.703854  219832 cri.go:89] found id: "5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:19.703876  219832 cri.go:89] found id: ""
	I1019 17:14:19.703884  219832 logs.go:282] 1 containers: [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f]
	I1019 17:14:19.703929  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:19.708299  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:14:19.708363  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:14:19.739364  219832 cri.go:89] found id: ""
	I1019 17:14:19.739385  219832 logs.go:282] 0 containers: []
	W1019 17:14:19.739392  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:14:19.739398  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:14:19.739443  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:14:19.772127  219832 cri.go:89] found id: ""
	I1019 17:14:19.772150  219832 logs.go:282] 0 containers: []
	W1019 17:14:19.772160  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:14:19.772169  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:14:19.772181  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:14:19.824279  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:14:19.824312  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:14:19.857506  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:14:19.857532  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:14:19.938806  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:14:19.938842  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:14:19.954944  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:14:19.954977  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:14:20.020117  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:14:20.020183  219832 logs.go:123] Gathering logs for kube-apiserver [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca] ...
	I1019 17:14:20.020201  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:20.060748  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:14:20.060784  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:20.119816  219832 logs.go:123] Gathering logs for kube-controller-manager [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f] ...
	I1019 17:14:20.119850  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
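
The "Gathering logs" cycle above reduces to a handful of commands that can be replayed by hand on the node. A minimal sketch, using the same paths and the container ID found above (the crictl location may differ per base image):

    # list all kube-apiserver containers known to CRI-O, IDs only
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the CRI-O and kubelet journals, as the gathering steps do
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # last 400 log lines of the container found above
    sudo /usr/local/bin/crictl logs --tail 400 f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca
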
	I1019 17:14:22.652133  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:14:22.652570  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:14:22.652632  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:14:22.652687  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:14:22.681735  219832 cri.go:89] found id: "f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:22.681772  219832 cri.go:89] found id: ""
	I1019 17:14:22.681782  219832 logs.go:282] 1 containers: [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca]
	I1019 17:14:22.681833  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:22.685990  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:14:22.686053  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:14:22.714686  219832 cri.go:89] found id: ""
	I1019 17:14:22.714712  219832 logs.go:282] 0 containers: []
	W1019 17:14:22.714720  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:14:22.714727  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:14:22.714794  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:14:22.743236  219832 cri.go:89] found id: ""
	I1019 17:14:22.743259  219832 logs.go:282] 0 containers: []
	W1019 17:14:22.743266  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:14:22.743272  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:14:22.743320  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:14:22.771755  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:22.771777  219832 cri.go:89] found id: ""
	I1019 17:14:22.771788  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:14:22.771839  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:22.776362  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:14:22.776438  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:14:22.805178  219832 cri.go:89] found id: ""
	I1019 17:14:22.805203  219832 logs.go:282] 0 containers: []
	W1019 17:14:22.805212  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:14:22.805218  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:14:22.805272  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:14:22.834903  219832 cri.go:89] found id: "5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:22.834928  219832 cri.go:89] found id: ""
	I1019 17:14:22.834938  219832 logs.go:282] 1 containers: [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f]
	I1019 17:14:22.834996  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:22.839007  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:14:22.839090  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:14:22.865560  219832 cri.go:89] found id: ""
	I1019 17:14:22.865592  219832 logs.go:282] 0 containers: []
	W1019 17:14:22.865603  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:14:22.865610  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:14:22.865658  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:14:22.892447  219832 cri.go:89] found id: ""
	I1019 17:14:22.892476  219832 logs.go:282] 0 containers: []
	W1019 17:14:22.892488  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:14:22.892499  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:14:22.892514  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:14:22.923636  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:14:22.923665  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:14:22.999280  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:14:22.999316  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:14:23.015218  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:14:23.015245  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:14:23.073672  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:14:23.073692  219832 logs.go:123] Gathering logs for kube-apiserver [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca] ...
	I1019 17:14:23.073704  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:23.106735  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:14:23.106764  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:23.154646  219832 logs.go:123] Gathering logs for kube-controller-manager [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f] ...
	I1019 17:14:23.154680  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:23.182453  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:14:23.182481  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
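
The healthz poll that opens each cycle is a plain HTTPS GET against the apiserver. Roughly equivalent by hand (-k skips certificate verification, since only reachability matters here):

    curl -k --max-time 2 https://192.168.94.2:8443/healthz
    # prints "ok" once the apiserver serves; fails with "connection refused" while it is down, as above
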
	I1019 17:14:20.192878  241081 out.go:252] * Restarting existing docker container for "old-k8s-version-904967" ...
	I1019 17:14:20.192953  241081 cli_runner.go:164] Run: docker start old-k8s-version-904967
	I1019 17:14:20.456944  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:20.477322  241081 kic.go:430] container "old-k8s-version-904967" state is running.
	I1019 17:14:20.477725  241081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-904967
	I1019 17:14:20.497285  241081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/config.json ...
	I1019 17:14:20.497568  241081 machine.go:94] provisionDockerMachine start ...
	I1019 17:14:20.497638  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:20.517685  241081 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:20.517943  241081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1019 17:14:20.517957  241081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:14:20.518670  241081 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52370->127.0.0.1:33064: read: connection reset by peer
	I1019 17:14:23.657232  241081 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-904967
	
	I1019 17:14:23.657260  241081 ubuntu.go:182] provisioning hostname "old-k8s-version-904967"
	I1019 17:14:23.657325  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:23.677024  241081 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:23.677309  241081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1019 17:14:23.677330  241081 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-904967 && echo "old-k8s-version-904967" | sudo tee /etc/hostname
	I1019 17:14:23.828261  241081 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-904967
	
	I1019 17:14:23.828372  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:23.851959  241081 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:23.852244  241081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1019 17:14:23.852267  241081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-904967' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-904967/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-904967' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:14:23.991269  241081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:14:23.991297  241081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:14:23.991337  241081 ubuntu.go:190] setting up certificates
	I1019 17:14:23.991349  241081 provision.go:84] configureAuth start
	I1019 17:14:23.991406  241081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-904967
	I1019 17:14:24.010594  241081 provision.go:143] copyHostCerts
	I1019 17:14:24.010673  241081 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:14:24.010718  241081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:14:24.010818  241081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:14:24.010963  241081 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:14:24.010977  241081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:14:24.011019  241081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:14:24.011146  241081 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:14:24.011157  241081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:14:24.011198  241081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:14:24.011293  241081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-904967 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-904967]
	I1019 17:14:24.175278  241081 provision.go:177] copyRemoteCerts
	I1019 17:14:24.175336  241081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:14:24.175368  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.194567  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:24.293545  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:14:24.312590  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1019 17:14:24.332381  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:14:24.351462  241081 provision.go:87] duration metric: took 360.081029ms to configureAuth
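
configureAuth generated a server certificate with the SANs listed above and copied it into /etc/docker on the machine. One way to confirm the SANs landed, using plain openssl (nothing minikube-specific assumed):

    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
    # expect DNS:localhost, DNS:minikube, DNS:old-k8s-version-904967, IP:127.0.0.1, IP:192.168.85.2 (order may vary)
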
	I1019 17:14:24.351491  241081 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:14:24.351653  241081 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:14:24.351763  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.371134  241081 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:24.371359  241081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1019 17:14:24.371375  241081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:14:24.668312  241081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:14:24.668340  241081 machine.go:97] duration metric: took 4.17075294s to provisionDockerMachine
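
The --insecure-registry flag lands in an environment file rather than in crio.conf itself; the restart above relies on the crio unit sourcing /etc/sysconfig/crio.minikube (an assumption about the base image's unit). To confirm the file was written and is referenced (EnvironmentFiles is a standard systemd show property):

    cat /etc/sysconfig/crio.minikube
    systemctl show crio --property=EnvironmentFiles
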
	I1019 17:14:24.668352  241081 start.go:293] postStartSetup for "old-k8s-version-904967" (driver="docker")
	I1019 17:14:24.668368  241081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:14:24.668437  241081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:14:24.668486  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.690218  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:24.795411  241081 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:14:24.799401  241081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:14:24.799437  241081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:14:24.799451  241081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:14:24.799523  241081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:14:24.799634  241081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:14:24.799774  241081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:14:24.808637  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:14:24.828144  241081 start.go:296] duration metric: took 159.777462ms for postStartSetup
	I1019 17:14:24.828232  241081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:14:24.828277  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.847836  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:24.943424  241081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:14:24.948300  241081 fix.go:56] duration metric: took 4.77711343s for fixHost
	I1019 17:14:24.948326  241081 start.go:83] releasing machines lock for "old-k8s-version-904967", held for 4.77716337s
	I1019 17:14:24.948416  241081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-904967
	I1019 17:14:24.966625  241081 ssh_runner.go:195] Run: cat /version.json
	I1019 17:14:24.966679  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.966737  241081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:14:24.966823  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:24.986698  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:24.987048  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:25.082250  241081 ssh_runner.go:195] Run: systemctl --version
	I1019 17:14:25.143476  241081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:14:25.179942  241081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:14:25.185576  241081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:14:25.185633  241081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:14:25.194900  241081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
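
The parentheses in the find invocation above appear unescaped because the runner logs the raw argument list rather than a shell string. Run through a shell, the same disable-bridge-CNI step needs quoting; a sketch:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
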
	I1019 17:14:25.194927  241081 start.go:496] detecting cgroup driver to use...
	I1019 17:14:25.194958  241081 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:14:25.195021  241081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:14:25.210985  241081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:14:25.226445  241081 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:14:25.226504  241081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:14:25.242391  241081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:14:25.256838  241081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:14:25.343544  241081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:14:25.428236  241081 docker.go:234] disabling docker service ...
	I1019 17:14:25.428317  241081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:14:25.443320  241081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:14:25.456928  241081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:14:25.538554  241081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:14:25.627400  241081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:14:25.641221  241081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:14:25.656564  241081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 17:14:25.656641  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.666849  241081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:14:25.666932  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.677254  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.687426  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.697785  241081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:14:25.707576  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.718435  241081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:25.727957  241081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
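
The sed edits above pin four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the systemd cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A quick check that all four landed before the crio restart below:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
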
	I1019 17:14:25.737858  241081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:14:25.745801  241081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:14:25.754482  241081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:25.851647  241081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:14:25.971595  241081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:14:25.971667  241081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:14:25.976242  241081 start.go:564] Will wait 60s for crictl version
	I1019 17:14:25.976294  241081 ssh_runner.go:195] Run: which crictl
	I1019 17:14:25.980870  241081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:14:26.010819  241081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:14:26.010909  241081 ssh_runner.go:195] Run: crio --version
	I1019 17:14:26.044180  241081 ssh_runner.go:195] Run: crio --version
	I1019 17:14:26.075665  241081 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 17:14:23.583393  234083 node_ready.go:49] node "no-preload-806996" is "Ready"
	I1019 17:14:23.583429  234083 node_ready.go:38] duration metric: took 12.003382397s for node "no-preload-806996" to be "Ready" ...
	I1019 17:14:23.583447  234083 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:14:23.583503  234083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:23.596203  234083 api_server.go:72] duration metric: took 12.287241729s to wait for apiserver process to appear ...
	I1019 17:14:23.596228  234083 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:14:23.596244  234083 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:14:23.600924  234083 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:14:23.601943  234083 api_server.go:141] control plane version: v1.34.1
	I1019 17:14:23.601973  234083 api_server.go:131] duration metric: took 5.738828ms to wait for apiserver health ...
	I1019 17:14:23.601985  234083 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:14:23.605609  234083 system_pods.go:59] 8 kube-system pods found
	I1019 17:14:23.605642  234083 system_pods.go:61] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:23.605651  234083 system_pods.go:61] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:23.605659  234083 system_pods.go:61] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:23.605664  234083 system_pods.go:61] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:23.605670  234083 system_pods.go:61] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:23.605675  234083 system_pods.go:61] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:23.605680  234083 system_pods.go:61] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:23.605688  234083 system_pods.go:61] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:14:23.605706  234083 system_pods.go:74] duration metric: took 3.703943ms to wait for pod list to return data ...
	I1019 17:14:23.605720  234083 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:14:23.608389  234083 default_sa.go:45] found service account: "default"
	I1019 17:14:23.608409  234083 default_sa.go:55] duration metric: took 2.672955ms for default service account to be created ...
	I1019 17:14:23.608419  234083 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:14:23.611467  234083 system_pods.go:86] 8 kube-system pods found
	I1019 17:14:23.611503  234083 system_pods.go:89] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:23.611509  234083 system_pods.go:89] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:23.611515  234083 system_pods.go:89] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:23.611518  234083 system_pods.go:89] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:23.611522  234083 system_pods.go:89] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:23.611525  234083 system_pods.go:89] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:23.611528  234083 system_pods.go:89] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:23.611533  234083 system_pods.go:89] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:14:23.611565  234083 retry.go:31] will retry after 279.366819ms: missing components: kube-dns
	I1019 17:14:23.895589  234083 system_pods.go:86] 8 kube-system pods found
	I1019 17:14:23.895630  234083 system_pods.go:89] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:23.895638  234083 system_pods.go:89] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:23.895648  234083 system_pods.go:89] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:23.895654  234083 system_pods.go:89] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:23.895660  234083 system_pods.go:89] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:23.895668  234083 system_pods.go:89] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:23.895671  234083 system_pods.go:89] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:23.895675  234083 system_pods.go:89] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:14:23.895701  234083 retry.go:31] will retry after 389.190381ms: missing components: kube-dns
	I1019 17:14:24.288736  234083 system_pods.go:86] 8 kube-system pods found
	I1019 17:14:24.288768  234083 system_pods.go:89] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:24.288774  234083 system_pods.go:89] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:24.288779  234083 system_pods.go:89] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:24.288782  234083 system_pods.go:89] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:24.288786  234083 system_pods.go:89] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:24.288789  234083 system_pods.go:89] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:24.288792  234083 system_pods.go:89] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:24.288797  234083 system_pods.go:89] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:14:24.288810  234083 retry.go:31] will retry after 347.801707ms: missing components: kube-dns
	I1019 17:14:24.641082  234083 system_pods.go:86] 8 kube-system pods found
	I1019 17:14:24.641123  234083 system_pods.go:89] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:24.641132  234083 system_pods.go:89] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:24.641140  234083 system_pods.go:89] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:24.641147  234083 system_pods.go:89] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:24.641154  234083 system_pods.go:89] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:24.641159  234083 system_pods.go:89] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:24.641164  234083 system_pods.go:89] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:24.641174  234083 system_pods.go:89] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:14:24.641192  234083 retry.go:31] will retry after 460.563279ms: missing components: kube-dns
	I1019 17:14:25.106664  234083 system_pods.go:86] 8 kube-system pods found
	I1019 17:14:25.106706  234083 system_pods.go:89] "coredns-66bc5c9577-s4dxw" [c70058e7-34d5-4394-843b-329c5916e0d2] Running
	I1019 17:14:25.106712  234083 system_pods.go:89] "etcd-no-preload-806996" [7880cf1a-b8d2-4270-a73d-85f886e5148e] Running
	I1019 17:14:25.106715  234083 system_pods.go:89] "kindnet-zndcx" [1b78605a-6dea-41a1-9322-0042dffaf352] Running
	I1019 17:14:25.106719  234083 system_pods.go:89] "kube-apiserver-no-preload-806996" [67918b03-cbf6-45e9-94c7-0ed7d6ed83f2] Running
	I1019 17:14:25.106737  234083 system_pods.go:89] "kube-controller-manager-no-preload-806996" [ab1b3b13-57bf-404e-9db6-45c581228ff2] Running
	I1019 17:14:25.106741  234083 system_pods.go:89] "kube-proxy-76f5v" [80cf4856-a9c9-4c35-847d-a1d94f45adc1] Running
	I1019 17:14:25.106744  234083 system_pods.go:89] "kube-scheduler-no-preload-806996" [1cd39f15-0112-4c30-89e7-b419922ba57f] Running
	I1019 17:14:25.106747  234083 system_pods.go:89] "storage-provisioner" [464b7dd6-a5a2-44da-97ba-d2ba712ff9cd] Running
	I1019 17:14:25.106756  234083 system_pods.go:126] duration metric: took 1.498330862s to wait for k8s-apps to be running ...
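
The "missing components: kube-dns" retries above key off the k8s-app=kube-dns label carried by the coredns pods. The equivalent manual check, assuming the kubeconfig already points at this cluster:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    # polled until coredns moved from Pending to Running, as seen above
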
	I1019 17:14:25.106764  234083 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:14:25.106820  234083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:14:25.120583  234083 system_svc.go:56] duration metric: took 13.808966ms WaitForService to wait for kubelet
	I1019 17:14:25.120611  234083 kubeadm.go:587] duration metric: took 13.811656353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:14:25.120631  234083 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:14:25.123789  234083 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:14:25.123819  234083 node_conditions.go:123] node cpu capacity is 8
	I1019 17:14:25.123833  234083 node_conditions.go:105] duration metric: took 3.195078ms to run NodePressure ...
	I1019 17:14:25.123846  234083 start.go:242] waiting for startup goroutines ...
	I1019 17:14:25.123852  234083 start.go:247] waiting for cluster config update ...
	I1019 17:14:25.123862  234083 start.go:256] writing updated cluster config ...
	I1019 17:14:25.124262  234083 ssh_runner.go:195] Run: rm -f paused
	I1019 17:14:25.129044  234083 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:14:25.206632  234083 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4dxw" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.211534  234083 pod_ready.go:94] pod "coredns-66bc5c9577-s4dxw" is "Ready"
	I1019 17:14:25.211555  234083 pod_ready.go:86] duration metric: took 4.890601ms for pod "coredns-66bc5c9577-s4dxw" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.213727  234083 pod_ready.go:83] waiting for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.217650  234083 pod_ready.go:94] pod "etcd-no-preload-806996" is "Ready"
	I1019 17:14:25.217673  234083 pod_ready.go:86] duration metric: took 3.927479ms for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.219946  234083 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.223700  234083 pod_ready.go:94] pod "kube-apiserver-no-preload-806996" is "Ready"
	I1019 17:14:25.223728  234083 pod_ready.go:86] duration metric: took 3.761463ms for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.225602  234083 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.534158  234083 pod_ready.go:94] pod "kube-controller-manager-no-preload-806996" is "Ready"
	I1019 17:14:25.534188  234083 pod_ready.go:86] duration metric: took 308.561681ms for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:25.733172  234083 pod_ready.go:83] waiting for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:26.133692  234083 pod_ready.go:94] pod "kube-proxy-76f5v" is "Ready"
	I1019 17:14:26.133732  234083 pod_ready.go:86] duration metric: took 400.525638ms for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:26.334406  234083 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:26.733959  234083 pod_ready.go:94] pod "kube-scheduler-no-preload-806996" is "Ready"
	I1019 17:14:26.733985  234083 pod_ready.go:86] duration metric: took 399.548758ms for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:26.734003  234083 pod_ready.go:40] duration metric: took 1.604897394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:14:26.780752  234083 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:14:26.783519  234083 out.go:179] * Done! kubectl is now configured to use "no-preload-806996" cluster and "default" namespace by default
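
The pod_ready loop above amounts to waiting on the Ready condition for each control-plane label in turn. A condensed equivalent with kubectl, shown here for the kube-dns label only (the context name is assumed to match the profile):

    kubectl --context no-preload-806996 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
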
	I1019 17:14:26.076962  241081 cli_runner.go:164] Run: docker network inspect old-k8s-version-904967 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:14:26.096205  241081 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:14:26.100910  241081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:14:26.112419  241081 kubeadm.go:884] updating cluster {Name:old-k8s-version-904967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-904967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:14:26.112527  241081 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:14:26.112577  241081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:14:26.148699  241081 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:14:26.148723  241081 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:14:26.148783  241081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:14:26.180797  241081 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:14:26.180822  241081 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:14:26.180831  241081 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 17:14:26.180953  241081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-904967 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-904967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
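
In the drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet unit before redefining it. Once the files are written (see the scp steps below), the merged unit can be inspected with:

    systemctl cat kubelet
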
	I1019 17:14:26.181037  241081 ssh_runner.go:195] Run: crio config
	I1019 17:14:26.233452  241081 cni.go:84] Creating CNI manager for ""
	I1019 17:14:26.233470  241081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:14:26.233482  241081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:14:26.233502  241081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-904967 NodeName:old-k8s-version-904967 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:14:26.233654  241081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-904967"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
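
Before this rendered file is handed to kubeadm it can be sanity-checked offline. kubeadm config validate has existed since roughly v1.26, so the v1.28 binary staged on the node should accept it; a sketch, using the .new path from the scp step below:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
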
	
	I1019 17:14:26.233722  241081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 17:14:26.243778  241081 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:14:26.243855  241081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:14:26.252920  241081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1019 17:14:26.267191  241081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:14:26.280547  241081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1019 17:14:26.293692  241081 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:14:26.298596  241081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:14:26.310129  241081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:26.396160  241081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:14:26.424749  241081 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967 for IP: 192.168.85.2
	I1019 17:14:26.424773  241081 certs.go:195] generating shared ca certs ...
	I1019 17:14:26.424796  241081 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:26.424941  241081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:14:26.424993  241081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:14:26.425007  241081 certs.go:257] generating profile certs ...
	I1019 17:14:26.425104  241081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/client.key
	I1019 17:14:26.425164  241081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/apiserver.key.0cbb3f78
	I1019 17:14:26.425202  241081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/proxy-client.key
	I1019 17:14:26.425319  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:14:26.425362  241081 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:14:26.425369  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:14:26.425391  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:14:26.425414  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:14:26.425435  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:14:26.425478  241081 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:14:26.426043  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:14:26.447961  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:14:26.470254  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:14:26.492789  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:14:26.517637  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1019 17:14:26.538280  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:14:26.556674  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:14:26.575237  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:14:26.593857  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:14:26.613016  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:14:26.634016  241081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:14:26.652596  241081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:14:26.666632  241081 ssh_runner.go:195] Run: openssl version
	I1019 17:14:26.673124  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:14:26.682470  241081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:14:26.686828  241081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:14:26.686886  241081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:14:26.724600  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:14:26.733928  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:14:26.744214  241081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:14:26.748710  241081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:14:26.748772  241081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:14:26.787153  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:14:26.797566  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:14:26.809768  241081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:26.814798  241081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:26.814861  241081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:26.853878  241081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
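The `openssl x509 -hash -noout` runs above compute the subject-name hash OpenSSL uses to look up trust anchors: each CA in /etc/ssl/certs needs a symlink named <hash>.0 (e.g. b5213941.0 for minikubeCA.pem) pointing at the PEM, which is what the `test -L || ln -fs` guards create. A sketch of that step, shelling out to openssl the same way; the helper name is illustrative:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL's lookup-by-hash scheme expects for a trusted CA PEM.
func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, same short-circuit as "test -L || ln -fs"
	}
	return os.Symlink(pem, link)
}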
	I1019 17:14:26.863661  241081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:14:26.868794  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:14:26.907396  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:14:26.952663  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:14:27.000004  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:14:27.055492  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:14:27.107502  241081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
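Each `-checkend 86400` probe above exits nonzero if the certificate expires within the next 86400 seconds (24 hours), which is why every control-plane cert is checked before the existing files are reused. A pure-Go equivalent using crypto/x509, as a sketch only (minikube shells out to openssl as logged):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin24h reports whether the PEM-encoded cert at path will
// expire within 24 hours, mirroring `openssl x509 -checkend 86400`.
func expiresWithin24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}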
	I1019 17:14:27.150664  241081 kubeadm.go:401] StartCluster: {Name:old-k8s-version-904967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-904967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:14:27.150786  241081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:14:27.150853  241081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:14:27.185553  241081 cri.go:89] found id: "d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c"
	I1019 17:14:27.185575  241081 cri.go:89] found id: "f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda"
	I1019 17:14:27.185581  241081 cri.go:89] found id: "78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face"
	I1019 17:14:27.185586  241081 cri.go:89] found id: "783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f"
	I1019 17:14:27.185590  241081 cri.go:89] found id: ""
	I1019 17:14:27.185644  241081 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:14:27.197699  241081 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:14:27Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:14:27.197803  241081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:14:27.206418  241081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:14:27.206441  241081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:14:27.206492  241081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:14:27.215015  241081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:14:27.215711  241081 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-904967" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:14:27.216191  241081 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-904967" cluster setting kubeconfig missing "old-k8s-version-904967" context setting]
	I1019 17:14:27.217121  241081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
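The repair logged above adds the missing cluster and context entries for the profile back into the kubeconfig. A hedged sketch of that kind of repair using client-go's clientcmd package; the function name and the caller-supplied name/server values are illustrative, not minikube's code:

package sketch

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts a cluster and context entry named `name`
// if either is missing, then writes the file back under the same path.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}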
	I1019 17:14:27.219161  241081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:14:27.227661  241081 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:14:27.227702  241081 kubeadm.go:602] duration metric: took 21.255051ms to restartPrimaryControlPlane
	I1019 17:14:27.227714  241081 kubeadm.go:403] duration metric: took 77.068843ms to StartCluster
	I1019 17:14:27.227737  241081 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:27.227812  241081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:14:27.229088  241081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:27.229405  241081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:14:27.229455  241081 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:14:27.229564  241081 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-904967"
	I1019 17:14:27.229591  241081 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-904967"
	W1019 17:14:27.229604  241081 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:14:27.229633  241081 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:14:27.229648  241081 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-904967"
	I1019 17:14:27.229654  241081 addons.go:70] Setting dashboard=true in profile "old-k8s-version-904967"
	I1019 17:14:27.229664  241081 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-904967"
	I1019 17:14:27.229678  241081 addons.go:239] Setting addon dashboard=true in "old-k8s-version-904967"
	W1019 17:14:27.229692  241081 addons.go:248] addon dashboard should already be in state true
	I1019 17:14:27.229718  241081 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:14:27.229639  241081 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:14:27.230006  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:27.230220  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:27.230412  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:27.231758  241081 out.go:179] * Verifying Kubernetes components...
	I1019 17:14:27.233292  241081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:27.256977  241081 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:14:27.258631  241081 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-904967"
	W1019 17:14:27.258655  241081 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:14:27.258684  241081 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:14:27.259171  241081 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:14:27.259807  241081 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:14:27.259817  241081 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:14:25.731127  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:14:25.731502  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:14:25.731557  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:14:25.731613  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:14:25.760664  219832 cri.go:89] found id: "f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:25.760690  219832 cri.go:89] found id: ""
	I1019 17:14:25.760701  219832 logs.go:282] 1 containers: [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca]
	I1019 17:14:25.760759  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:25.765290  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:14:25.765354  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:14:25.798714  219832 cri.go:89] found id: ""
	I1019 17:14:25.798743  219832 logs.go:282] 0 containers: []
	W1019 17:14:25.798754  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:14:25.798761  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:14:25.798823  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:14:25.827845  219832 cri.go:89] found id: ""
	I1019 17:14:25.827881  219832 logs.go:282] 0 containers: []
	W1019 17:14:25.827892  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:14:25.827900  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:14:25.827960  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:14:25.859488  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:25.859516  219832 cri.go:89] found id: ""
	I1019 17:14:25.859526  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:14:25.859585  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:25.864378  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:14:25.864445  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:14:25.892169  219832 cri.go:89] found id: ""
	I1019 17:14:25.892201  219832 logs.go:282] 0 containers: []
	W1019 17:14:25.892212  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:14:25.892220  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:14:25.892280  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:14:25.921152  219832 cri.go:89] found id: "5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:25.921172  219832 cri.go:89] found id: ""
	I1019 17:14:25.921180  219832 logs.go:282] 1 containers: [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f]
	I1019 17:14:25.921225  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:25.925455  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:14:25.925515  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:14:25.956164  219832 cri.go:89] found id: ""
	I1019 17:14:25.956196  219832 logs.go:282] 0 containers: []
	W1019 17:14:25.956207  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:14:25.956215  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:14:25.956277  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:14:25.985615  219832 cri.go:89] found id: ""
	I1019 17:14:25.985646  219832 logs.go:282] 0 containers: []
	W1019 17:14:25.985657  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:14:25.985666  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:14:25.985708  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:14:26.020524  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:14:26.020555  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:14:26.102707  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:14:26.102737  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:14:26.117576  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:14:26.117608  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:14:26.182824  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:14:26.182846  219832 logs.go:123] Gathering logs for kube-apiserver [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca] ...
	I1019 17:14:26.182859  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:26.218120  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:14:26.218158  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:26.272450  219832 logs.go:123] Gathering logs for kube-controller-manager [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f] ...
	I1019 17:14:26.272489  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:26.301474  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:14:26.301502  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
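While the apiserver refuses connections, the run above falls back to fanning out diagnostic probes (kubelet and CRI-O journals, dmesg, crictl, describe nodes) and keeps going past individual failures, as the tolerated `failed describe nodes` shows. A compact sketch of that fan-out pattern, with probe commands copied from the log and the function name illustrative:

package sketch

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs each probe via bash and records its output,
// converting failures into messages instead of aborting the sweep.
func gatherDiagnostics() map[string]string {
	probes := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
	}
	out := make(map[string]string)
	for name, cmd := range probes {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("probe failed: %v", err)
			continue
		}
		out[name] = string(b)
	}
	return out
}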
	I1019 17:14:27.261337  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:14:27.261360  241081 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:14:27.261430  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:27.261634  241081 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:14:27.261693  241081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:14:27.261755  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:27.298740  241081 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:14:27.298773  241081 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:14:27.298832  241081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:14:27.299351  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:27.305761  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:27.331542  241081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:14:27.385943  241081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:14:27.400305  241081 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-904967" to be "Ready" ...
	I1019 17:14:27.419038  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:14:27.419078  241081 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:14:27.422912  241081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:14:27.434666  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:14:27.434695  241081 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:14:27.443297  241081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:14:27.450603  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:14:27.450631  241081 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:14:27.467527  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:14:27.467555  241081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:14:27.486376  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:14:27.486402  241081 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:14:27.506285  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:14:27.506315  241081 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:14:27.522967  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:14:27.522996  241081 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:14:27.539591  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:14:27.539624  241081 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:14:27.553668  241081 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:14:27.553695  241081 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:14:27.569501  241081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:14:29.077462  241081 node_ready.go:49] node "old-k8s-version-904967" is "Ready"
	I1019 17:14:29.077506  241081 node_ready.go:38] duration metric: took 1.67716751s for node "old-k8s-version-904967" to be "Ready" ...
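The "waiting up to 6m0s for node ... Ready" wait that just completed is a poll of the node's Ready condition. A sketch of the same check with client-go, assuming a recent client-go/apimachinery (wait.PollUntilContextTimeout); the function name is illustrative:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is
// True or the timeout elapses; transient Get errors are retried.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.TODO(), time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}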
	I1019 17:14:29.077524  241081 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:14:29.077583  241081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:29.922766  241081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.499808558s)
	I1019 17:14:29.922853  241081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.479529281s)
	I1019 17:14:30.352288  241081 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.78271628s)
	I1019 17:14:30.352339  241081 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.274735886s)
	I1019 17:14:30.352371  241081 api_server.go:72] duration metric: took 3.122929023s to wait for apiserver process to appear ...
	I1019 17:14:30.352453  241081 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:14:30.352474  241081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:14:30.356331  241081 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-904967 addons enable metrics-server
	
	I1019 17:14:30.357226  241081 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:14:30.358828  241081 api_server.go:141] control plane version: v1.28.0
	I1019 17:14:30.358850  241081 api_server.go:131] duration metric: took 6.39022ms to wait for apiserver health ...
	I1019 17:14:30.358861  241081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:14:30.360117  241081 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
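The healthz wait that just returned 200 "ok" is a simple poll of https://<ip>:8443/healthz; the "connection refused" stops in the interleaved 219832 run are the retry case, meaning the apiserver process is not listening yet. A minimal polling sketch; TLS verification is skipped only because this is a sketch against a self-signed test cluster, and the function name is illustrative:

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers
// 200 or the deadline passes. A real client should trust the cluster
// CA instead of setting InsecureSkipVerify.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}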
	I1019 17:14:28.854569  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:14:28.855144  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:14:28.855204  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:14:28.855264  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:14:28.885481  219832 cri.go:89] found id: "f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:28.885504  219832 cri.go:89] found id: ""
	I1019 17:14:28.885514  219832 logs.go:282] 1 containers: [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca]
	I1019 17:14:28.885568  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:28.889687  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:14:28.889758  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:14:28.928750  219832 cri.go:89] found id: ""
	I1019 17:14:28.928786  219832 logs.go:282] 0 containers: []
	W1019 17:14:28.928796  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:14:28.928803  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:14:28.928857  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:14:28.973617  219832 cri.go:89] found id: ""
	I1019 17:14:28.973773  219832 logs.go:282] 0 containers: []
	W1019 17:14:28.973813  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:14:28.973851  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:14:28.974013  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:14:29.005555  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:29.005581  219832 cri.go:89] found id: ""
	I1019 17:14:29.005590  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:14:29.005649  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:29.009985  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:14:29.010132  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:14:29.038659  219832 cri.go:89] found id: ""
	I1019 17:14:29.038689  219832 logs.go:282] 0 containers: []
	W1019 17:14:29.038701  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:14:29.038737  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:14:29.038790  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:14:29.090262  219832 cri.go:89] found id: "5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:29.090289  219832 cri.go:89] found id: ""
	I1019 17:14:29.090297  219832 logs.go:282] 1 containers: [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f]
	I1019 17:14:29.090353  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:14:29.096802  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:14:29.096882  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:14:29.148419  219832 cri.go:89] found id: ""
	I1019 17:14:29.148449  219832 logs.go:282] 0 containers: []
	W1019 17:14:29.148460  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:14:29.148469  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:14:29.148529  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:14:29.183322  219832 cri.go:89] found id: ""
	I1019 17:14:29.183351  219832 logs.go:282] 0 containers: []
	W1019 17:14:29.183361  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:14:29.183373  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:14:29.183388  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:14:29.218365  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:14:29.218397  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:14:29.311019  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:14:29.311079  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:14:29.332893  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:14:29.332935  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:14:29.399826  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:14:29.399854  219832 logs.go:123] Gathering logs for kube-apiserver [f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca] ...
	I1019 17:14:29.399871  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f0e30211e9bc4bf09df46743958d90e765ea8f87bb974a2cd7fad5a9df90b0ca"
	I1019 17:14:29.435670  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:14:29.435705  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:14:29.486686  219832 logs.go:123] Gathering logs for kube-controller-manager [5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f] ...
	I1019 17:14:29.486723  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5fcaafbb6b2ff370f143bccc65a03c3cc8c597c0a3ccb77051d8f843f143626f"
	I1019 17:14:29.520233  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:14:29.520260  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:14:32.079575  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	
	
	==> CRI-O <==
	Oct 19 17:14:23 no-preload-806996 crio[772]: time="2025-10-19T17:14:23.819144745Z" level=info msg="Starting container: 601a1cd4ea4407c123f94ab8324a0c537417acd8f8587f1be130dcea4f1ac6c5" id=9e032061-d73d-4642-8887-d9d960e41977 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:14:23 no-preload-806996 crio[772]: time="2025-10-19T17:14:23.821175199Z" level=info msg="Started container" PID=2949 containerID=601a1cd4ea4407c123f94ab8324a0c537417acd8f8587f1be130dcea4f1ac6c5 description=kube-system/coredns-66bc5c9577-s4dxw/coredns id=9e032061-d73d-4642-8887-d9d960e41977 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af38950ed4111c3786a74fafeddcaf5cdad1cbec6da8d6fcd5a0f94ec2d9127b
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.260797023Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6f26f8d7-4569-4623-92f4-f8c4997414a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.260929023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.269503638Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fab091b643cd19f34e496c0438ada84b0a03af78bf4d9aec1337abd43aac9636 UID:fce9d63b-e499-49e5-92ea-520aaa56468e NetNS:/var/run/netns/1b4babba-1179-4110-ac86-15f77c98b8ed Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b070}] Aliases:map[]}"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.269552113Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.294191639Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fab091b643cd19f34e496c0438ada84b0a03af78bf4d9aec1337abd43aac9636 UID:fce9d63b-e499-49e5-92ea-520aaa56468e NetNS:/var/run/netns/1b4babba-1179-4110-ac86-15f77c98b8ed Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b070}] Aliases:map[]}"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.29440322Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.298435355Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.299733599Z" level=info msg="Ran pod sandbox fab091b643cd19f34e496c0438ada84b0a03af78bf4d9aec1337abd43aac9636 with infra container: default/busybox/POD" id=6f26f8d7-4569-4623-92f4-f8c4997414a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.306121779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e1ba39b-fc62-4608-b600-150b38168a22 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.306272179Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7e1ba39b-fc62-4608-b600-150b38168a22 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.306315871Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7e1ba39b-fc62-4608-b600-150b38168a22 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.307007398Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a02b114-0cdd-40a8-95d9-71ee64f75400 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:14:27 no-preload-806996 crio[772]: time="2025-10-19T17:14:27.310449491Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.02568583Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4a02b114-0cdd-40a8-95d9-71ee64f75400 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.026378933Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4f49ed54-f75c-4888-b1c6-e6318df4e0f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.027832551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1fae87da-6c7b-4f4b-8426-3c025d079502 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.031146326Z" level=info msg="Creating container: default/busybox/busybox" id=6499e773-2af9-4bdc-ade2-f7a6343c95bd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.03192612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.035604887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.03618555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.068508946Z" level=info msg="Created container 9dc91f4627236c5eb7de7f73b224ed0a715a328f46732282911593589fdac453: default/busybox/busybox" id=6499e773-2af9-4bdc-ade2-f7a6343c95bd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.069240836Z" level=info msg="Starting container: 9dc91f4627236c5eb7de7f73b224ed0a715a328f46732282911593589fdac453" id=60425d93-cca4-4354-99a6-03034a29fda6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:14:28 no-preload-806996 crio[772]: time="2025-10-19T17:14:28.071019785Z" level=info msg="Started container" PID=3028 containerID=9dc91f4627236c5eb7de7f73b224ed0a715a328f46732282911593589fdac453 description=default/busybox/busybox id=60425d93-cca4-4354-99a6-03034a29fda6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fab091b643cd19f34e496c0438ada84b0a03af78bf4d9aec1337abd43aac9636
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9dc91f4627236       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   fab091b643cd1       busybox                                     default
	601a1cd4ea440       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   af38950ed4111       coredns-66bc5c9577-s4dxw                    kube-system
	da4b850c72ee7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   4d3d61acab7fa       storage-provisioner                         kube-system
	5e173c9ef145b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   3753e311775ac       kindnet-zndcx                               kube-system
	59e18a5014096       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   5b943cf3c5b8c       kube-proxy-76f5v                            kube-system
	29780db3ed9d6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   ffa49b5e330be       kube-apiserver-no-preload-806996            kube-system
	89337a7dc188c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   e5662d24a77cf       kube-scheduler-no-preload-806996            kube-system
	50de34a344471       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   90638d54c6780       etcd-no-preload-806996                      kube-system
	a6b0c011b0b07       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   3813f5fcd9575       kube-controller-manager-no-preload-806996   kube-system
	
	
	==> coredns [601a1cd4ea4407c123f94ab8324a0c537417acd8f8587f1be130dcea4f1ac6c5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44626 - 29904 "HINFO IN 5691615529475049052.5690700467562952910. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.488160727s
	
	
	==> describe nodes <==
	Name:               no-preload-806996
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-806996
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-806996
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_14_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:14:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-806996
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:14:35 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:14:35 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:14:35 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:14:35 +0000   Sun, 19 Oct 2025 17:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-806996
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                18a9e783-21eb-4794-bbc4-d787e21fb79d
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-s4dxw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-806996                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-zndcx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-806996             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-806996    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-76f5v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-806996             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node no-preload-806996 event: Registered Node no-preload-806996 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-806996 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [50de34a3444710ba29d372db6c13a3e1bf7b7311fc8036323737f7220474c4d0] <==
	{"level":"warn","ts":"2025-10-19T17:14:00.956862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:00.963767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:00.972717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:00.981887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:00.990039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:00.997407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.005339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.013150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.020913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.028897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.035485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.043463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.051288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.060002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.067686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.076452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.084824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.092840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.100624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.115935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.124167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.134342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:01.184624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:14:03.044454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.693887ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356069988177697 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/cluster-admin\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/cluster-admin\" value_size:496 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-10-19T17:14:03.044575Z","caller":"traceutil/trace.go:172","msg":"trace[870362536] transaction","detail":"{read_only:false; response_revision:105; number_of_response:1; }","duration":"253.666099ms","start":"2025-10-19T17:14:02.790892Z","end":"2025-10-19T17:14:03.044558Z","steps":["trace[870362536] 'process raft request'  (duration: 69.39828ms)","trace[870362536] 'compare'  (duration: 183.560136ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:14:35 up 57 min,  0 user,  load average: 2.43, 2.60, 1.58
	Linux no-preload-806996 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e173c9ef145b5deefedd502e47203fcd336a00745a92cf356aa7f9e7b6e813e] <==
	I1019 17:14:12.993637       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:14:12.993964       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:14:12.994157       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:14:12.994176       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:14:12.994196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:14:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:14:13.221484       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:14:13.221832       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:14:13.222046       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:14:13.292809       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:14:13.622605       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:14:13.622631       1 metrics.go:72] Registering metrics
	I1019 17:14:13.622710       1 controller.go:711] "Syncing nftables rules"
	I1019 17:14:23.224219       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:14:23.224290       1 main.go:301] handling current node
	I1019 17:14:33.224914       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:14:33.224958       1 main.go:301] handling current node
	
	
	==> kube-apiserver [29780db3ed9d69f41c63886fe36663c687da945abc1b6284d5ea6eaef0ae2788] <==
	E1019 17:14:01.805454       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1019 17:14:01.854281       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:14:01.859857       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:14:01.860153       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:14:01.868220       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:14:01.868344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:14:01.950754       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:14:02.721912       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:14:02.783270       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:14:02.783293       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:14:03.625704       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:14:03.669420       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:14:03.763563       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:14:03.770417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 17:14:03.771563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:14:03.776389       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:04.703839       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:14:04.794974       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:14:04.804456       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:14:04.811945       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:14:10.409169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:14:10.413936       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:14:10.506803       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:14:10.556567       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1019 17:14:34.045158       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:41666: use of closed network connection
	
	
	==> kube-controller-manager [a6b0c011b0b075792d7b0570724279f7251149cf7076c248735eb9805d9075be] <==
	I1019 17:14:09.670655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:14:09.676903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:14:09.684229       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:14:09.695519       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:14:09.702141       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:14:09.703355       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:14:09.703360       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:14:09.703371       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:14:09.703380       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:14:09.703506       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:14:09.703526       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:14:09.703563       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:14:09.704462       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:14:09.704521       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:14:09.704524       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:14:09.704528       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:14:09.704556       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:14:09.705857       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:14:09.705891       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:14:09.705970       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:14:09.707127       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:14:09.708479       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:14:09.710784       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:14:09.727092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:14:24.656659       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [59e18a5014096fdb64052be6eedd6adbdee8ebfdcab9b37a4a73a7cba231b82f] <==
	I1019 17:14:10.977122       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:14:11.037259       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:14:11.137769       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:14:11.137802       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:14:11.137883       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:14:11.157689       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:14:11.157742       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:14:11.163453       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:14:11.163762       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:14:11.163780       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:11.165151       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:14:11.165171       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:14:11.165165       1 config.go:200] "Starting service config controller"
	I1019 17:14:11.165215       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:14:11.165222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:14:11.165230       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:14:11.165321       1 config.go:309] "Starting node config controller"
	I1019 17:14:11.165333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:14:11.265341       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:14:11.265350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:14:11.265412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:14:11.265431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [89337a7dc188cc84bacfb0624ebc71200b7415f38b47f5c544f5e00266cfafeb] <==
	E1019 17:14:01.719948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:14:01.720049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:14:01.720193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:14:01.720200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:14:01.720253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:14:02.558233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:14:02.621079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:14:02.633413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:14:02.676808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:14:02.693102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:14:02.761062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:14:02.765375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:14:02.786592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:14:02.825677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:14:02.899490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 17:14:02.915028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:14:02.915027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:14:02.935622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:14:03.049264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:14:03.058520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:14:03.124153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:14:03.219713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:14:03.275241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:14:03.317956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1019 17:14:05.315788       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:14:05 no-preload-806996 kubelet[2320]: I1019 17:14:05.665689    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-806996" podStartSLOduration=1.665664094 podStartE2EDuration="1.665664094s" podCreationTimestamp="2025-10-19 17:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:05.655158262 +0000 UTC m=+1.126096758" watchObservedRunningTime="2025-10-19 17:14:05.665664094 +0000 UTC m=+1.136602571"
	Oct 19 17:14:05 no-preload-806996 kubelet[2320]: I1019 17:14:05.676804    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-806996" podStartSLOduration=1.676783556 podStartE2EDuration="1.676783556s" podCreationTimestamp="2025-10-19 17:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:05.665624621 +0000 UTC m=+1.136563095" watchObservedRunningTime="2025-10-19 17:14:05.676783556 +0000 UTC m=+1.147722036"
	Oct 19 17:14:05 no-preload-806996 kubelet[2320]: I1019 17:14:05.692540    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-806996" podStartSLOduration=1.69251643 podStartE2EDuration="1.69251643s" podCreationTimestamp="2025-10-19 17:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:05.677208148 +0000 UTC m=+1.148146663" watchObservedRunningTime="2025-10-19 17:14:05.69251643 +0000 UTC m=+1.163454904"
	Oct 19 17:14:09 no-preload-806996 kubelet[2320]: I1019 17:14:09.642282    2320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:14:09 no-preload-806996 kubelet[2320]: I1019 17:14:09.642962    2320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641086    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80cf4856-a9c9-4c35-847d-a1d94f45adc1-kube-proxy\") pod \"kube-proxy-76f5v\" (UID: \"80cf4856-a9c9-4c35-847d-a1d94f45adc1\") " pod="kube-system/kube-proxy-76f5v"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641142    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dg5\" (UniqueName: \"kubernetes.io/projected/80cf4856-a9c9-4c35-847d-a1d94f45adc1-kube-api-access-g8dg5\") pod \"kube-proxy-76f5v\" (UID: \"80cf4856-a9c9-4c35-847d-a1d94f45adc1\") " pod="kube-system/kube-proxy-76f5v"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641169    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b78605a-6dea-41a1-9322-0042dffaf352-lib-modules\") pod \"kindnet-zndcx\" (UID: \"1b78605a-6dea-41a1-9322-0042dffaf352\") " pod="kube-system/kindnet-zndcx"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641194    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80cf4856-a9c9-4c35-847d-a1d94f45adc1-lib-modules\") pod \"kube-proxy-76f5v\" (UID: \"80cf4856-a9c9-4c35-847d-a1d94f45adc1\") " pod="kube-system/kube-proxy-76f5v"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641216    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1b78605a-6dea-41a1-9322-0042dffaf352-cni-cfg\") pod \"kindnet-zndcx\" (UID: \"1b78605a-6dea-41a1-9322-0042dffaf352\") " pod="kube-system/kindnet-zndcx"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641239    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b78605a-6dea-41a1-9322-0042dffaf352-xtables-lock\") pod \"kindnet-zndcx\" (UID: \"1b78605a-6dea-41a1-9322-0042dffaf352\") " pod="kube-system/kindnet-zndcx"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641283    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87h5f\" (UniqueName: \"kubernetes.io/projected/1b78605a-6dea-41a1-9322-0042dffaf352-kube-api-access-87h5f\") pod \"kindnet-zndcx\" (UID: \"1b78605a-6dea-41a1-9322-0042dffaf352\") " pod="kube-system/kindnet-zndcx"
	Oct 19 17:14:10 no-preload-806996 kubelet[2320]: I1019 17:14:10.641365    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80cf4856-a9c9-4c35-847d-a1d94f45adc1-xtables-lock\") pod \"kube-proxy-76f5v\" (UID: \"80cf4856-a9c9-4c35-847d-a1d94f45adc1\") " pod="kube-system/kube-proxy-76f5v"
	Oct 19 17:14:11 no-preload-806996 kubelet[2320]: I1019 17:14:11.659337    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-76f5v" podStartSLOduration=1.659310566 podStartE2EDuration="1.659310566s" podCreationTimestamp="2025-10-19 17:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:11.659133976 +0000 UTC m=+7.130072453" watchObservedRunningTime="2025-10-19 17:14:11.659310566 +0000 UTC m=+7.130249043"
	Oct 19 17:14:15 no-preload-806996 kubelet[2320]: I1019 17:14:15.189180    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zndcx" podStartSLOduration=3.335611834 podStartE2EDuration="5.18915758s" podCreationTimestamp="2025-10-19 17:14:10 +0000 UTC" firstStartedPulling="2025-10-19 17:14:10.893705343 +0000 UTC m=+6.364643803" lastFinishedPulling="2025-10-19 17:14:12.747251088 +0000 UTC m=+8.218189549" observedRunningTime="2025-10-19 17:14:13.668921247 +0000 UTC m=+9.139859735" watchObservedRunningTime="2025-10-19 17:14:15.18915758 +0000 UTC m=+10.660096057"
	Oct 19 17:14:23 no-preload-806996 kubelet[2320]: I1019 17:14:23.436573    2320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:14:23 no-preload-806996 kubelet[2320]: I1019 17:14:23.538970    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5t68\" (UniqueName: \"kubernetes.io/projected/c70058e7-34d5-4394-843b-329c5916e0d2-kube-api-access-d5t68\") pod \"coredns-66bc5c9577-s4dxw\" (UID: \"c70058e7-34d5-4394-843b-329c5916e0d2\") " pod="kube-system/coredns-66bc5c9577-s4dxw"
	Oct 19 17:14:23 no-preload-806996 kubelet[2320]: I1019 17:14:23.539029    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c70058e7-34d5-4394-843b-329c5916e0d2-config-volume\") pod \"coredns-66bc5c9577-s4dxw\" (UID: \"c70058e7-34d5-4394-843b-329c5916e0d2\") " pod="kube-system/coredns-66bc5c9577-s4dxw"
	Oct 19 17:14:23 no-preload-806996 kubelet[2320]: I1019 17:14:23.539151    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/464b7dd6-a5a2-44da-97ba-d2ba712ff9cd-tmp\") pod \"storage-provisioner\" (UID: \"464b7dd6-a5a2-44da-97ba-d2ba712ff9cd\") " pod="kube-system/storage-provisioner"
	Oct 19 17:14:23 no-preload-806996 kubelet[2320]: I1019 17:14:23.539205    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctkbh\" (UniqueName: \"kubernetes.io/projected/464b7dd6-a5a2-44da-97ba-d2ba712ff9cd-kube-api-access-ctkbh\") pod \"storage-provisioner\" (UID: \"464b7dd6-a5a2-44da-97ba-d2ba712ff9cd\") " pod="kube-system/storage-provisioner"
	Oct 19 17:14:24 no-preload-806996 kubelet[2320]: I1019 17:14:24.695657    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4dxw" podStartSLOduration=14.695633447 podStartE2EDuration="14.695633447s" podCreationTimestamp="2025-10-19 17:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:24.695292289 +0000 UTC m=+20.166230784" watchObservedRunningTime="2025-10-19 17:14:24.695633447 +0000 UTC m=+20.166571924"
	Oct 19 17:14:24 no-preload-806996 kubelet[2320]: I1019 17:14:24.707729    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.707678543 podStartE2EDuration="13.707678543s" podCreationTimestamp="2025-10-19 17:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:14:24.706440989 +0000 UTC m=+20.177379493" watchObservedRunningTime="2025-10-19 17:14:24.707678543 +0000 UTC m=+20.178617016"
	Oct 19 17:14:27 no-preload-806996 kubelet[2320]: I1019 17:14:27.062850    2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bdf8\" (UniqueName: \"kubernetes.io/projected/fce9d63b-e499-49e5-92ea-520aaa56468e-kube-api-access-9bdf8\") pod \"busybox\" (UID: \"fce9d63b-e499-49e5-92ea-520aaa56468e\") " pod="default/busybox"
	Oct 19 17:14:28 no-preload-806996 kubelet[2320]: I1019 17:14:28.707439    2320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.986717729 podStartE2EDuration="2.707420373s" podCreationTimestamp="2025-10-19 17:14:26 +0000 UTC" firstStartedPulling="2025-10-19 17:14:27.306557831 +0000 UTC m=+22.777496302" lastFinishedPulling="2025-10-19 17:14:28.027260477 +0000 UTC m=+23.498198946" observedRunningTime="2025-10-19 17:14:28.707277122 +0000 UTC m=+24.178215626" watchObservedRunningTime="2025-10-19 17:14:28.707420373 +0000 UTC m=+24.178358849"
	Oct 19 17:14:34 no-preload-806996 kubelet[2320]: E1019 17:14:34.045057    2320 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41876->127.0.0.1:46265: write tcp 127.0.0.1:41876->127.0.0.1:46265: write: broken pipe
	
	
	==> storage-provisioner [da4b850c72ee75327d35bd28943ebb113870b8a553258405897d591a16449e5a] <==
	I1019 17:14:23.828931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:14:23.838505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:14:23.838569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:14:23.841153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:23.847324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:14:23.847507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:14:23.847730       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-806996_77f83806-f188-488b-9506-3f4e36cc8da5!
	I1019 17:14:23.848293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ade808a3-50b3-4da9-9740-0f1294aa75ce", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-806996_77f83806-f188-488b-9506-3f4e36cc8da5 became leader
	W1019 17:14:23.849971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:23.853656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:14:23.948660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-806996_77f83806-f188-488b-9506-3f4e36cc8da5!
	W1019 17:14:25.858452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:25.863132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:27.866321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:27.870233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:29.873267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:29.878308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:31.881865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:31.885548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:33.889364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:14:33.893423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-806996 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.09s)
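
Note on the storage-provisioner warnings above: the provisioner's leader election still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so each lease renewal trips the API server's "v1 Endpoints is deprecated in v1.33+" warning, which is why the message repeats every couple of seconds. A minimal sketch for inspecting both sides from this cluster, assuming kubectl is pointed at the no-preload-806996 context (object and namespace names are taken from the log; the commands themselves are standard kubectl):

	# Endpoints object the provisioner leader-elects on (the source of the warnings):
	kubectl --context no-preload-806996 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Replacement API the warning recommends:
	kubectl --context no-preload-806996 -n kube-system get endpointslices.discovery.k8s.io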

x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-904967 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-904967 --alsologtostderr -v=1: exit status 80 (1.982685s)

-- stdout --
	* Pausing node old-k8s-version-904967 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 17:15:22.688792  253120 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:22.689047  253120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:22.689055  253120 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:22.689059  253120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:22.689281  253120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:22.689536  253120 out.go:368] Setting JSON to false
	I1019 17:15:22.689567  253120 mustload.go:66] Loading cluster: old-k8s-version-904967
	I1019 17:15:22.689924  253120 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:15:22.690510  253120 cli_runner.go:164] Run: docker container inspect old-k8s-version-904967 --format={{.State.Status}}
	I1019 17:15:22.719181  253120 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:15:22.719624  253120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:22.791335  253120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-19 17:15:22.778868262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:22.791950  253120 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-904967 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:15:22.794634  253120 out.go:179] * Pausing node old-k8s-version-904967 ... 
	I1019 17:15:22.795990  253120 host.go:66] Checking if "old-k8s-version-904967" exists ...
	I1019 17:15:22.796313  253120 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:22.796355  253120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-904967
	I1019 17:15:22.821139  253120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/old-k8s-version-904967/id_rsa Username:docker}
	I1019 17:15:22.923100  253120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:22.956487  253120 pause.go:52] kubelet running: true
	I1019 17:15:22.956586  253120 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:23.149853  253120 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:23.149947  253120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:23.224759  253120 cri.go:89] found id: "6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1"
	I1019 17:15:23.224787  253120 cri.go:89] found id: "ce94b6419b2c4ac1db095de413bd1d82939921cfe884c2239c2cb800683b9fc5"
	I1019 17:15:23.224793  253120 cri.go:89] found id: "f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e"
	I1019 17:15:23.224799  253120 cri.go:89] found id: "55c6a978b088cdf7358bab39ddcabd75fc5780747290a484f984a56f7a86398c"
	I1019 17:15:23.224803  253120 cri.go:89] found id: "1cb477f3e2b8baf572ed7209b429278d823d78e9b46164608b3a173129ae017e"
	I1019 17:15:23.224807  253120 cri.go:89] found id: "d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c"
	I1019 17:15:23.224811  253120 cri.go:89] found id: "f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda"
	I1019 17:15:23.224815  253120 cri.go:89] found id: "78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face"
	I1019 17:15:23.224819  253120 cri.go:89] found id: "783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f"
	I1019 17:15:23.224835  253120 cri.go:89] found id: "d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	I1019 17:15:23.224843  253120 cri.go:89] found id: "1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5"
	I1019 17:15:23.224847  253120 cri.go:89] found id: ""
	I1019 17:15:23.224899  253120 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:23.237095  253120 retry.go:31] will retry after 322.441022ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:23Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:15:23.560658  253120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:23.575244  253120 pause.go:52] kubelet running: false
	I1019 17:15:23.575303  253120 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:23.737847  253120 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:23.737930  253120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:23.810452  253120 cri.go:89] found id: "6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1"
	I1019 17:15:23.810476  253120 cri.go:89] found id: "ce94b6419b2c4ac1db095de413bd1d82939921cfe884c2239c2cb800683b9fc5"
	I1019 17:15:23.810481  253120 cri.go:89] found id: "f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e"
	I1019 17:15:23.810484  253120 cri.go:89] found id: "55c6a978b088cdf7358bab39ddcabd75fc5780747290a484f984a56f7a86398c"
	I1019 17:15:23.810487  253120 cri.go:89] found id: "1cb477f3e2b8baf572ed7209b429278d823d78e9b46164608b3a173129ae017e"
	I1019 17:15:23.810491  253120 cri.go:89] found id: "d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c"
	I1019 17:15:23.810493  253120 cri.go:89] found id: "f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda"
	I1019 17:15:23.810495  253120 cri.go:89] found id: "78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face"
	I1019 17:15:23.810503  253120 cri.go:89] found id: "783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f"
	I1019 17:15:23.810509  253120 cri.go:89] found id: "d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	I1019 17:15:23.810512  253120 cri.go:89] found id: "1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5"
	I1019 17:15:23.810514  253120 cri.go:89] found id: ""
	I1019 17:15:23.810550  253120 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:23.827742  253120 retry.go:31] will retry after 529.414201ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:23Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:15:24.358283  253120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:24.372439  253120 pause.go:52] kubelet running: false
	I1019 17:15:24.372501  253120 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:24.516345  253120 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:24.516433  253120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:24.590057  253120 cri.go:89] found id: "6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1"
	I1019 17:15:24.590095  253120 cri.go:89] found id: "ce94b6419b2c4ac1db095de413bd1d82939921cfe884c2239c2cb800683b9fc5"
	I1019 17:15:24.590102  253120 cri.go:89] found id: "f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e"
	I1019 17:15:24.590106  253120 cri.go:89] found id: "55c6a978b088cdf7358bab39ddcabd75fc5780747290a484f984a56f7a86398c"
	I1019 17:15:24.590110  253120 cri.go:89] found id: "1cb477f3e2b8baf572ed7209b429278d823d78e9b46164608b3a173129ae017e"
	I1019 17:15:24.590116  253120 cri.go:89] found id: "d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c"
	I1019 17:15:24.590120  253120 cri.go:89] found id: "f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda"
	I1019 17:15:24.590124  253120 cri.go:89] found id: "78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face"
	I1019 17:15:24.590128  253120 cri.go:89] found id: "783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f"
	I1019 17:15:24.590137  253120 cri.go:89] found id: "d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	I1019 17:15:24.590141  253120 cri.go:89] found id: "1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5"
	I1019 17:15:24.590145  253120 cri.go:89] found id: ""
	I1019 17:15:24.590201  253120 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:24.604808  253120 out.go:203] 
	W1019 17:15:24.606389  253120 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:15:24.606410  253120 out.go:285] * 
	* 
	W1019 17:15:24.610802  253120 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:15:24.612518  253120 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-904967 --alsologtostderr -v=1 failed: exit status 80
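
The failing step is visible in the stderr above: pause first enumerates the containers over the CRI (the IDs are found via crictl), then shells out to `sudo runc list -f json`, which exits 1 on every retry because /run/runc does not exist on this crio node. A sketch of reproducing the mismatch by hand, assuming the old-k8s-version-904967 node is still running (both commands are standard and mirror what the log already ran over SSH):

	# The exact call minikube retries, failing with "open /run/runc: no such file or directory":
	minikube ssh -p old-k8s-version-904967 -- sudo runc list -f json
	# The same containers are listed fine through the CRI:
	minikube ssh -p old-k8s-version-904967 -- sudo crictl ps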
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-904967
helpers_test.go:243: (dbg) docker inspect old-k8s-version-904967:

-- stdout --
	[
	    {
	        "Id": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	        "Created": "2025-10-19T17:13:07.590891639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:14:20.219769719Z",
	            "FinishedAt": "2025-10-19T17:14:19.276211912Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hosts",
	        "LogPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719-json.log",
	        "Name": "/old-k8s-version-904967",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-904967:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-904967",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	                "LowerDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/merged",
	                "UpperDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/diff",
	                "WorkDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-904967",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-904967/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-904967",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02522498c863643b1b09524efd2bf17f7a439b7df3897cb9cded8bdf4c7afe32",
	            "SandboxKey": "/var/run/docker/netns/02522498c863",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-904967": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a9:14:07:e7:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfc938debfe6129c7f9048a0a383817f7b1fb100af5f0af7c3b32f6517e76495",
	                    "EndpointID": "d529790b30aabcd542e1f896aaa4a643570519c05c29343bbe41c310fa0142ef",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-904967",
	                        "c0f82ef529f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
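Most of the inspect dump above is noise for this failure; the fields that matter are State (FinishedAt 2025-10-19T17:14:19, StartedAt 2025-10-19T17:14:20, i.e. a restart) and HostConfig.Tmpfs (/run and /tmp are tmpfs mounts, so /run/runc does not persist across that restart). A small Go sketch, not part of the test suite, that extracts just those fields with a docker format template; it assumes the docker CLI is on PATH and takes the container name from this report:

	// inspectfields.go - a post-mortem helper sketch: pull only the
	// failure-relevant fields out of `docker inspect`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "old-k8s-version-904967" // node container name from the log above

		// The template pulls the restart timestamps and the tmpfs mounts
		// that explain why state kept under /run can vanish between runs.
		format := "status={{.State.Status}} started={{.State.StartedAt}} " +
			"finished={{.State.FinishedAt}} tmpfs={{json .HostConfig.Tmpfs}}"

		out, err := exec.Command("docker", "inspect", "-f", format, name).CombinedOutput()
		if err != nil {
			fmt.Println("docker inspect failed:", err, string(out))
			return
		}
		fmt.Print(string(out))
	}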
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967: exit status 2 (330.182724ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
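The probe above prints Running on stdout yet exits 2, which is why the helper annotates it "(may be ok)": the --format template reports only the host state, while the exit code flags the health of the remaining components (the exact code values are version-dependent; treat that mapping as an assumption here). A sketch of reading both signals the way the harness does:

	// statuscheck.go - a sketch separating "host up" from "fully healthy",
	// assuming (not verified here) that a non-zero exit code marks a stopped
	// or paused component even when the host itself is Running.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-904967")
		out, err := cmd.Output()

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 2 in the run captured above
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}

		fmt.Printf("host=%q exit=%d\n", string(out), code)
		if code != 0 {
			// Host may still be Running; the non-zero code is the hint that
			// some other component is unhealthy, hence "(may be ok)".
		}
	}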
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25: (1.200284752s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-639932 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ -p cert-options-639932 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ delete  │ -p cert-options-639932                                                                                                                                                                                                                        │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │                     │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-447724    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ stop    │ stopped-upgrade-659566 stop                                                                                                                                                                                                                   │ stopped-upgrade-659566    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139        │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:15:14.733521  251026 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:14.733767  251026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:14.733775  251026 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:14.733779  251026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:14.733976  251026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:14.734500  251026 out.go:368] Setting JSON to false
	I1019 17:15:14.735721  251026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3461,"bootTime":1760890654,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:14.735830  251026 start.go:143] virtualization: kvm guest
	I1019 17:15:14.737727  251026 out.go:179] * [embed-certs-090139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:14.739393  251026 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:14.739387  251026 notify.go:221] Checking for updates...
	I1019 17:15:14.741689  251026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:14.743334  251026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:14.744674  251026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:14.746044  251026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:14.747304  251026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:14.749177  251026 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:14.749329  251026 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:14.749441  251026 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:15:14.749574  251026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:14.777339  251026 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:14.777418  251026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:14.838046  251026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:14.82692179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:14.838210  251026 docker.go:319] overlay module found
	I1019 17:15:14.840575  251026 out.go:179] * Using the docker driver based on user configuration
	I1019 17:15:14.841801  251026 start.go:309] selected driver: docker
	I1019 17:15:14.841824  251026 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:14.841840  251026 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:14.842508  251026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:14.900903  251026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:14.890943329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:14.901115  251026 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:15:14.901335  251026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:15:14.902903  251026 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:14.903978  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:14.904051  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:14.904063  251026 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:14.904176  251026 start.go:353] cluster config:
	{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:14.905605  251026 out.go:179] * Starting "embed-certs-090139" primary control-plane node in "embed-certs-090139" cluster
	I1019 17:15:14.906705  251026 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:14.907962  251026 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:14.909204  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:14.909249  251026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:14.909258  251026 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:14.909282  251026 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:14.909400  251026 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:14.909414  251026 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:15:14.909531  251026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json ...
	I1019 17:15:14.909558  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json: {Name:mkfc3a621c5d880f4560cce53d1586fc6ace20b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:14.931329  251026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:14.931353  251026 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:14.931370  251026 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:14.931398  251026 start.go:360] acquireMachinesLock for embed-certs-090139: {Name:mkdaa028ca10b90b55fac4626a0f749931b30e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:14.931532  251026 start.go:364] duration metric: took 110.563µs to acquireMachinesLock for "embed-certs-090139"
	I1019 17:15:14.931569  251026 start.go:93] Provisioning new machine with config: &{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:14.931649  251026 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:15:14.954538  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:17.454802  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:13.559052  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:13.559104  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.089164  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:16.089656  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:16.089721  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:16.089779  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:16.123334  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:16.123361  219832 cri.go:89] found id: ""
	I1019 17:15:16.123372  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:16.123429  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.127987  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:16.128093  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:16.158797  219832 cri.go:89] found id: ""
	I1019 17:15:16.158824  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.158835  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:16.158846  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:16.158910  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:16.187518  219832 cri.go:89] found id: ""
	I1019 17:15:16.187541  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.187550  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:16.187556  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:16.187613  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:16.215732  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:16.215752  219832 cri.go:89] found id: ""
	I1019 17:15:16.215760  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:16.215815  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.219901  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:16.219960  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:16.251157  219832 cri.go:89] found id: ""
	I1019 17:15:16.251184  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.251194  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:16.251202  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:16.251264  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:16.280233  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.280260  219832 cri.go:89] found id: ""
	I1019 17:15:16.280270  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:16.280332  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.284541  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:16.284611  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:16.313991  219832 cri.go:89] found id: ""
	I1019 17:15:16.314019  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.314030  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:16.314038  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:16.314110  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:16.343788  219832 cri.go:89] found id: ""
	I1019 17:15:16.343823  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.343833  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:16.343845  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:16.343861  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:16.359868  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:16.359895  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:16.423803  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:16.423829  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:16.423843  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:16.458933  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:16.458966  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:16.513239  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:16.513274  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.543212  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:16.543250  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:16.594545  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:16.594580  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:16.632790  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:16.632820  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:14.934425  251026 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:14.934698  251026 start.go:159] libmachine.API.Create for "embed-certs-090139" (driver="docker")
	I1019 17:15:14.934733  251026 client.go:171] LocalClient.Create starting
	I1019 17:15:14.934818  251026 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:14.934857  251026 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:14.934880  251026 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:14.934964  251026 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:14.934990  251026 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:14.935017  251026 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:14.935472  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:14.954362  251026 cli_runner.go:211] docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:14.954486  251026 network_create.go:284] running [docker network inspect embed-certs-090139] to gather additional debugging logs...
	I1019 17:15:14.954510  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139
	W1019 17:15:14.973209  251026 cli_runner.go:211] docker network inspect embed-certs-090139 returned with exit code 1
	I1019 17:15:14.973240  251026 network_create.go:287] error running [docker network inspect embed-certs-090139]: docker network inspect embed-certs-090139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-090139 not found
	I1019 17:15:14.973263  251026 network_create.go:289] output of [docker network inspect embed-certs-090139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-090139 not found
	
	** /stderr **
	I1019 17:15:14.973345  251026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:14.992660  251026 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:14.993386  251026 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:14.994106  251026 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:14.994703  251026 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73bac96357aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:13:5a:d3:70} reservation:<nil>}
	I1019 17:15:14.995279  251026 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cfc938debfe6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ce:b6:4f:40:e7:1c} reservation:<nil>}
	I1019 17:15:14.995698  251026 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-40c59d31eea2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:d3:5e:b8:e0:1d} reservation:<nil>}
	I1019 17:15:14.996505  251026 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f22780}
	I1019 17:15:14.996542  251026 network_create.go:124] attempt to create docker network embed-certs-090139 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1019 17:15:14.996597  251026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-090139 embed-certs-090139
	I1019 17:15:15.059936  251026 network_create.go:108] docker network embed-certs-090139 192.168.103.0/24 created
	I1019 17:15:15.059973  251026 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-090139" container
	I1019 17:15:15.060039  251026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:15.080111  251026 cli_runner.go:164] Run: docker volume create embed-certs-090139 --label name.minikube.sigs.k8s.io=embed-certs-090139 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:15.101182  251026 oci.go:103] Successfully created a docker volume embed-certs-090139
	I1019 17:15:15.101306  251026 cli_runner.go:164] Run: docker run --rm --name embed-certs-090139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-090139 --entrypoint /usr/bin/test -v embed-certs-090139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:15:15.526386  251026 oci.go:107] Successfully prepared a docker volume embed-certs-090139
	I1019 17:15:15.526425  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:15.526451  251026 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:15:15.526525  251026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-090139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:15:19.954157  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:22.457295  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:19.238127  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:19.238618  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:19.238682  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:19.238726  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:19.266311  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:19.266338  219832 cri.go:89] found id: ""
	I1019 17:15:19.266348  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:19.266414  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.270728  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:19.270790  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:19.298278  219832 cri.go:89] found id: ""
	I1019 17:15:19.298301  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.298309  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:19.298314  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:19.298364  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:19.326745  219832 cri.go:89] found id: ""
	I1019 17:15:19.326775  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.326787  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:19.326794  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:19.326854  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:19.354905  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:19.354926  219832 cri.go:89] found id: ""
	I1019 17:15:19.354933  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:19.354980  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.359441  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:19.359508  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:19.389346  219832 cri.go:89] found id: ""
	I1019 17:15:19.389383  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.389395  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:19.389403  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:19.389463  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:19.418590  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:19.418610  219832 cri.go:89] found id: ""
	I1019 17:15:19.418618  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:19.418684  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.423198  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:19.423269  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:19.452003  219832 cri.go:89] found id: ""
	I1019 17:15:19.452029  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.452040  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:19.452048  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:19.452117  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:19.480534  219832 cri.go:89] found id: ""
	I1019 17:15:19.480570  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.480580  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:19.480592  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:19.480608  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:19.533388  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:19.533426  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:19.562013  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:19.562040  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:19.617533  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:19.617569  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:19.649762  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:19.649788  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:19.738452  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:19.738490  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:19.754204  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:19.754238  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:19.814233  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:19.814252  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:19.814268  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.347422  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:22.347850  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:22.347905  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:22.347964  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:22.381054  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.381097  219832 cri.go:89] found id: ""
	I1019 17:15:22.381113  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:22.381178  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.386755  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:22.386824  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:22.418110  219832 cri.go:89] found id: ""
	I1019 17:15:22.418133  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.418141  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:22.418146  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:22.418198  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:22.449584  219832 cri.go:89] found id: ""
	I1019 17:15:22.449610  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.449619  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:22.449627  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:22.449690  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:22.484198  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:22.484225  219832 cri.go:89] found id: ""
	I1019 17:15:22.484237  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:22.484294  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.489189  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:22.489250  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:22.518428  219832 cri.go:89] found id: ""
	I1019 17:15:22.518454  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.518462  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:22.518468  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:22.518521  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:22.552140  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:22.552166  219832 cri.go:89] found id: ""
	I1019 17:15:22.552177  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:22.552233  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.556414  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:22.556481  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:22.587591  219832 cri.go:89] found id: ""
	I1019 17:15:22.587619  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.587631  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:22.587639  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:22.587696  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:22.620495  219832 cri.go:89] found id: ""
	I1019 17:15:22.620521  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.620531  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:22.620541  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:22.620556  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:22.693954  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:22.693978  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:22.693993  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.738876  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:22.738919  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:22.804166  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:22.804204  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:22.843279  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:22.843312  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:22.895536  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:22.895573  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:22.929549  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:22.929589  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:23.045429  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:23.045466  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
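	
	The two identical passes above (17:15:19 and 17:15:22) are minikube's recovery loop for the "old-k8s-version" profile: poll the apiserver's /healthz endpoint and, while the connection is refused, enumerate CRI containers and gather component logs for diagnosis. A minimal shell equivalent of the poll, assuming curl on the host (minikube itself uses a Go HTTP client; -k stands in for its custom TLS handling):
	
	    # Wait until the apiserver answers /healthz; a refused connection,
	    # as logged above, means the control plane is still down.
	    until curl -fsk --max-time 2 https://192.168.94.2:8443/healthz >/dev/null; do
	      sleep 2
	    done
	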
	I1019 17:15:20.085324  251026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-090139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.558752737s)
	I1019 17:15:20.085364  251026 kic.go:203] duration metric: took 4.558910628s to extract preloaded images to volume ...
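	
	The 4.5s step just completed is the kic preload: rather than pulling images inside the new node, minikube untars an lz4 archive of /var (CRI-O image storage included) straight into the node's Docker volume, using a throwaway container whose entrypoint is tar. The same pattern, abbreviated from the full command in the log line above (the image digest is elided here):
	
	    # Extract an lz4-compressed preload into the volume that will back
	    # the node container's /var.
	    PRELOAD=preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PWD/$PRELOAD":/preloaded.tar:ro \
	      -v embed-certs-090139:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
	      -I lz4 -xf /preloaded.tar -C /extractDir
	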
	W1019 17:15:20.085467  251026 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:15:20.085504  251026 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:15:20.085552  251026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:15:20.145764  251026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-090139 --name embed-certs-090139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-090139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-090139 --network embed-certs-090139 --ip 192.168.103.2 --volume embed-certs-090139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:15:20.429669  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Running}}
	I1019 17:15:20.449537  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.469800  251026 cli_runner.go:164] Run: docker exec embed-certs-090139 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:15:20.516685  251026 oci.go:144] the created container "embed-certs-090139" has a running status.
	I1019 17:15:20.516758  251026 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa...
	I1019 17:15:20.679677  251026 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:15:20.710387  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.732716  251026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:15:20.732743  251026 kic_runner.go:114] Args: [docker exec --privileged embed-certs-090139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:15:20.789014  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.812619  251026 machine.go:94] provisionDockerMachine start ...
	I1019 17:15:20.812722  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:20.835671  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:20.836009  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:20.836032  251026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:15:20.975557  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-090139
	
	I1019 17:15:20.975597  251026 ubuntu.go:182] provisioning hostname "embed-certs-090139"
	I1019 17:15:20.975672  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:20.996125  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:20.996332  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:20.996347  251026 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-090139 && echo "embed-certs-090139" | sudo tee /etc/hostname
	I1019 17:15:21.145052  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-090139
	
	I1019 17:15:21.145154  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.165671  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:21.165930  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:21.165954  251026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-090139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-090139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-090139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:15:21.302201  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:15:21.302226  251026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:15:21.302262  251026 ubuntu.go:190] setting up certificates
	I1019 17:15:21.302279  251026 provision.go:84] configureAuth start
	I1019 17:15:21.302350  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:21.321982  251026 provision.go:143] copyHostCerts
	I1019 17:15:21.322050  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:15:21.322093  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:15:21.322178  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:15:21.322304  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:15:21.322317  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:15:21.322359  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:15:21.322443  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:15:21.322454  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:15:21.322490  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:15:21.322562  251026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.embed-certs-090139 san=[127.0.0.1 192.168.103.2 embed-certs-090139 localhost minikube]
	I1019 17:15:21.554655  251026 provision.go:177] copyRemoteCerts
	I1019 17:15:21.554718  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:15:21.554756  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.574049  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:21.673466  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:15:21.694318  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:15:21.715248  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:15:21.733349  251026 provision.go:87] duration metric: took 431.05374ms to configureAuth
	I1019 17:15:21.733384  251026 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:15:21.733607  251026 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:21.733786  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.752290  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:21.752494  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:21.752512  251026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:15:22.002937  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:15:22.002984  251026 machine.go:97] duration metric: took 1.190326989s to provisionDockerMachine
	I1019 17:15:22.002999  251026 client.go:174] duration metric: took 7.068259055s to LocalClient.Create
	I1019 17:15:22.003029  251026 start.go:167] duration metric: took 7.068331932s to libmachine.API.Create "embed-certs-090139"
	I1019 17:15:22.003046  251026 start.go:293] postStartSetup for "embed-certs-090139" (driver="docker")
	I1019 17:15:22.003062  251026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:15:22.003178  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:15:22.003226  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.022039  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.123010  251026 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:15:22.127283  251026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:15:22.127313  251026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:15:22.127326  251026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:15:22.127386  251026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:15:22.127483  251026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:15:22.127618  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:15:22.136014  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:22.159351  251026 start.go:296] duration metric: took 156.288638ms for postStartSetup
	I1019 17:15:22.159762  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:22.179062  251026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json ...
	I1019 17:15:22.179438  251026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:15:22.179490  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.197485  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.292805  251026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:15:22.298380  251026 start.go:128] duration metric: took 7.366713724s to createHost
	I1019 17:15:22.298409  251026 start.go:83] releasing machines lock for "embed-certs-090139", held for 7.366858572s
	I1019 17:15:22.298485  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:22.318657  251026 ssh_runner.go:195] Run: cat /version.json
	I1019 17:15:22.318724  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.318743  251026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:15:22.318806  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.338469  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.339812  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.437526  251026 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:22.505136  251026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:15:22.546609  251026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:15:22.552512  251026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:15:22.552589  251026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:15:22.584303  251026 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
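	
	Disabling the stock bridge/podman CNI configs (so the recommended kindnet can own pod networking) is done by renaming rather than deleting, which keeps the step reversible. The find invocation above, rewritten with quoting so it can be pasted into a shell:
	
	    # Rename any bridge/podman CNI config that is not already disabled,
	    # appending the ".mk_disabled" suffix minikube checks for.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	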
	I1019 17:15:22.584332  251026 start.go:496] detecting cgroup driver to use...
	I1019 17:15:22.584370  251026 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:15:22.584420  251026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:15:22.603536  251026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:15:22.621091  251026 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:15:22.621153  251026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:15:22.640749  251026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:15:22.662635  251026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:15:22.778522  251026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:15:22.878790  251026 docker.go:234] disabling docker service ...
	I1019 17:15:22.878862  251026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:15:22.899357  251026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:15:22.913445  251026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:15:23.018603  251026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:15:23.116655  251026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:15:23.130751  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:15:23.146643  251026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:15:23.146715  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.158103  251026 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:15:23.158174  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.168093  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.179185  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.189138  251026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:15:23.199467  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.209497  251026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.226167  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.235582  251026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:15:23.243700  251026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:15:23.251605  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:23.337262  251026 ssh_runner.go:195] Run: sudo systemctl restart crio
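	
	The sed chain above rewrites CRI-O's drop-in config in place: pin the pause image, switch to the systemd cgroup driver, recreate conmon_cgroup under that driver, and open unprivileged low ports inside pods. Collected into one script (same expressions, same file as in the log):
	
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Pause image and cgroup driver.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	    # Re-add conmon_cgroup after the driver line, then allow low ports.
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	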
	I1019 17:15:23.451851  251026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:15:23.451919  251026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:15:23.456483  251026 start.go:564] Will wait 60s for crictl version
	I1019 17:15:23.456548  251026 ssh_runner.go:195] Run: which crictl
	I1019 17:15:23.460493  251026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:15:23.487160  251026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:15:23.487241  251026 ssh_runner.go:195] Run: crio --version
	I1019 17:15:23.517832  251026 ssh_runner.go:195] Run: crio --version
	I1019 17:15:23.548698  251026 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:15:23.550011  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:23.569122  251026 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1019 17:15:23.573653  251026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:15:23.586171  251026 kubeadm.go:884] updating cluster {Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:15:23.586307  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:23.586359  251026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:23.629543  251026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:23.629577  251026 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:15:23.629637  251026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:23.657766  251026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:23.657789  251026 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:15:23.657799  251026 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1019 17:15:23.657896  251026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-090139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
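	
	Note the empty ExecStart= in the drop-in above: in a systemd override, an empty assignment clears the ExecStart list inherited from the packaged unit before the replacement command line is added; without it the unit would be rejected for having two ExecStart entries. The merged result can be verified on the node with standard systemd tooling:
	
	    # Show the unit plus every drop-in, and the effective ExecStart.
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart
	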
	I1019 17:15:23.657993  251026 ssh_runner.go:195] Run: crio config
	I1019 17:15:23.711682  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:23.711734  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:23.711751  251026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:15:23.711771  251026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-090139 NodeName:embed-certs-090139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:15:23.711903  251026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-090139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
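	
	The generated file bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. Once promoted into place it drives an ordinary kubeadm bootstrap; a bare sketch of that call (minikube wraps it and adds preflight-skip flags not shown in this excerpt):
	
	    # Bootstrap the control plane from the generated config (sketch only;
	    # minikube drives this through its own wrapper, not this bare call).
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new
	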
	
	I1019 17:15:23.711973  251026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:15:23.720816  251026 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:15:23.720893  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:15:23.729730  251026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1019 17:15:23.744357  251026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:15:23.761329  251026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1019 17:15:23.776293  251026 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:15:23.780853  251026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
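	
	The /etc/hosts update above (the same trick pinned host.minikube.internal at 17:15:23.573) is an idempotent replace-then-append: drop any line already tab-bound to the name, append the fresh mapping, and install the temp file with sudo cp so only the copy needs root. A hedged generalization; the helper name is ours, not minikube's:
	
	    # set_hosts_entry IP NAME: remove any existing tab-separated entry
	    # for NAME, append "IP<TAB>NAME", then copy the file into place.
	    set_hosts_entry() {
	      ip=$1 name=$2
	      { grep -v $'\t'"$name"'$' /etc/hosts
	        printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    set_hosts_entry 192.168.103.2 control-plane.minikube.internal
	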
	I1019 17:15:23.793006  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:23.879695  251026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:23.915880  251026 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139 for IP: 192.168.103.2
	I1019 17:15:23.915902  251026 certs.go:195] generating shared ca certs ...
	I1019 17:15:23.915918  251026 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.916099  251026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:15:23.916142  251026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:15:23.916151  251026 certs.go:257] generating profile certs ...
	I1019 17:15:23.916210  251026 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key
	I1019 17:15:23.916228  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt with IP's: []
	I1019 17:15:23.994616  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt ...
	I1019 17:15:23.994647  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt: {Name:mk5b0ad2a9e5bc2fcda176fa53af3350d10462e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.994966  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key ...
	I1019 17:15:23.994991  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key: {Name:mkd86b0c314959b8e88f43d0af08f937c5cbe956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.995109  251026 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374
	I1019 17:15:23.995125  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1019 17:15:24.334154  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 ...
	I1019 17:15:24.334184  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374: {Name:mk20fe03a8c98b5c6934b8d43bbc7d300f7be4a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.334345  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374 ...
	I1019 17:15:24.334358  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374: {Name:mk71a2678fb5c001b09fa91c9eec993482b058f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.334427  251026 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt
	I1019 17:15:24.334502  251026 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key
	I1019 17:15:24.334558  251026 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key
	I1019 17:15:24.334574  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt with IP's: []
	I1019 17:15:24.528305  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt ...
	I1019 17:15:24.528332  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt: {Name:mk0bc06ac6366f3e90200e24d483840c29c7f209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.528503  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key ...
	I1019 17:15:24.528525  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key: {Name:mk8430ddda2f85e9c315d59524911ccfa2efc9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
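	
	The profile certs above are generated in Go (crypto.go), but an equivalent openssl flow makes the shape visible: a fresh key, a CSR, and a cert signed by minikube's CA carrying the SANs listed at 17:15:23.995. Paths here are illustrative stand-ins for the ones in the log:
	
	    # Key + CSR, then sign with the minikube CA, embedding apiserver SANs.
	    CA=$HOME/.minikube/ca.crt; CAKEY=$HOME/.minikube/ca.key  # illustrative
	    openssl genrsa -out apiserver.key 2048
	    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA "$CA" -CAkey "$CAKEY" \
	      -CAcreateserial -days 365 -out apiserver.crt \
	      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2')
	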
	I1019 17:15:24.528754  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:15:24.528806  251026 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:15:24.528821  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:15:24.528853  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:15:24.528889  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:15:24.528923  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:15:24.528988  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:24.529643  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:15:24.550584  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:15:24.571620  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:15:24.593312  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:15:24.612383  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:15:24.632776  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:15:24.654315  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:15:24.675290  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:15:24.697930  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:15:24.722524  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	
	
	==> CRI-O <==
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.489696137Z" level=info msg="Created container 1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62/kubernetes-dashboard" id=89c2af69-c0d2-4410-b05c-a716b9f64393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.490426694Z" level=info msg="Starting container: 1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5" id=d9be2313-4257-48c0-b5a6-17ecc510374a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.492335274Z" level=info msg="Started container" PID=1723 containerID=1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62/kubernetes-dashboard id=d9be2313-4257-48c0-b5a6-17ecc510374a name=/runtime.v1.RuntimeService/StartContainer sandboxID=625d5e3cb20e2607d17cbff1dd36a168fc565b75040170e8a6f9ecb3ca1b2906
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.636329974Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6dd0b37c-ab03-4814-90dc-674840fcded3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.637516447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a17c66b1-fea8-4323-9029-21e90f6be831 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.638530484Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5bb61db0-0354-404b-a90a-5348ae7178cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.638801851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.645920248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646261466Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32e8931698d4654f5d77ee3e9f59bddb6653af6c572c95d3d1ef2c4598e8ff6f/merged/etc/passwd: no such file or directory"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646358522Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32e8931698d4654f5d77ee3e9f59bddb6653af6c572c95d3d1ef2c4598e8ff6f/merged/etc/group: no such file or directory"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646729508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.685673941Z" level=info msg="Created container 6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1: kube-system/storage-provisioner/storage-provisioner" id=5bb61db0-0354-404b-a90a-5348ae7178cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.686596381Z" level=info msg="Starting container: 6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1" id=9838e2cf-9e9d-4681-be50-70ca214bca08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.689555574Z" level=info msg="Started container" PID=1745 containerID=6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1 description=kube-system/storage-provisioner/storage-provisioner id=9838e2cf-9e9d-4681-be50-70ca214bca08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79e6edbfdddc93ebc72ee3704b70e0eb166908a3fbc458f9c6874078b6fd34e4
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.525959972Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e39600bc-391c-4444-a161-85f80d10e021 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.526939368Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea4bd173-c7ca-4aed-b615-31b2e7a226a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.528212918Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=7fddbc13-cb7a-48fb-a1a9-ae06105acde2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.528479714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.535166557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.535885246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.567244409Z" level=info msg="Created container d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=7fddbc13-cb7a-48fb-a1a9-ae06105acde2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.568531791Z" level=info msg="Starting container: d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d" id=b04dbbde-6799-4030-858c-3d705eab3b81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.570785877Z" level=info msg="Started container" PID=1760 containerID=d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper id=b04dbbde-6799-4030-858c-3d705eab3b81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c0423b95b0fde5c11431d46f3cc6d12d059a92672688085c4fac04c100a4fe4
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.661138428Z" level=info msg="Removing container: c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529" id=5d92c074-c31d-4926-8aaa-a67df0ad8c95 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.672236636Z" level=info msg="Removed container c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=5d92c074-c31d-4926-8aaa-a67df0ad8c95 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d09ce49842899       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   6c0423b95b0fd       dashboard-metrics-scraper-5f989dc9cf-7fw6d       kubernetes-dashboard
	6977bb31ffcd6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   79e6edbfdddc9       storage-provisioner                              kube-system
	1440d21cef285       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   625d5e3cb20e2       kubernetes-dashboard-8694d4445c-9tv62            kubernetes-dashboard
	b30c1cf139693       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   10bed606fc324       busybox                                          default
	ce94b6419b2c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   40731cdf61bdd       coredns-5dd5756b68-qdvcm                         kube-system
	f5b580c231276       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   79e6edbfdddc9       storage-provisioner                              kube-system
	55c6a978b088c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   aafb0a20b0a84       kube-proxy-gr6m9                                 kube-system
	1cb477f3e2b8b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   01776bd797f2e       kindnet-lh8rm                                    kube-system
	d585a77a4eff3       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   9aac900289460       kube-scheduler-old-k8s-version-904967            kube-system
	f8fee443a165e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   c29781f8a9392       etcd-old-k8s-version-904967                      kube-system
	78ff50c78f7cc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   9ab2d8754db1e       kube-controller-manager-old-k8s-version-904967   kube-system
	783eeba3fb702       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   b058c33a5ecb8       kube-apiserver-old-k8s-version-904967            kube-system
	
	
	==> coredns [ce94b6419b2c4ac1db095de413bd1d82939921cfe884c2239c2cb800683b9fc5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55707 - 45089 "HINFO IN 8344470819792762176.7498521989286999521. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.468137845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-904967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-904967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-904967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-904967
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-904967
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                1f7bd5b5-08c8-4ce1-be37-64fa8f96d211
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-qdvcm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-904967                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-lh8rm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-904967             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-904967    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-gr6m9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-904967             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fw6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9tv62             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-904967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-904967 event: Registered Node old-k8s-version-904967 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-904967 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-904967 event: Registered Node old-k8s-version-904967 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda] <==
	{"level":"info","ts":"2025-10-19T17:14:27.094467Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:14:27.09543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-19T17:14:27.096028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T17:14:27.096235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:27.096274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:27.098119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:14:27.098622Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:14:27.098652Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:14:27.098696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:14:27.098706Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:14:28.087324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.089035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:28.089032Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-904967 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:14:28.089099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:28.089327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:28.089384Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:28.091098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:14:28.091488Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:15:20.043984Z","caller":"traceutil/trace.go:171","msg":"trace[1162046623] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"146.090649ms","start":"2025-10-19T17:15:19.897863Z","end":"2025-10-19T17:15:20.043954Z","steps":["trace[1162046623] 'process raft request'  (duration: 145.939967ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:15:25 up 57 min,  0 user,  load average: 3.55, 2.84, 1.71
	Linux old-k8s-version-904967 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cb477f3e2b8baf572ed7209b429278d823d78e9b46164608b3a173129ae017e] <==
	I1019 17:14:30.156177       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:14:30.156566       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:14:30.156751       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:14:30.156782       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:14:30.156811       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:14:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:14:30.453656       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:14:30.453754       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:14:30.453767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:14:30.454249       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:14:30.850323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:14:30.850358       1 metrics.go:72] Registering metrics
	I1019 17:14:30.850435       1 controller.go:711] "Syncing nftables rules"
	I1019 17:14:40.459679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:14:40.459741       1 main.go:301] handling current node
	I1019 17:14:50.453653       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:14:50.453697       1 main.go:301] handling current node
	I1019 17:15:00.454419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:00.454542       1 main.go:301] handling current node
	I1019 17:15:10.454190       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:10.454236       1 main.go:301] handling current node
	I1019 17:15:20.460176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:20.460217       1 main.go:301] handling current node
	
	
	==> kube-apiserver [783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f] <==
	I1019 17:14:29.135155       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 17:14:29.142758       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 17:14:29.142774       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:14:29.142790       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:14:29.142885       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 17:14:29.142970       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:14:29.142980       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:14:29.142993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:14:29.143000       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:14:29.143389       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:14:29.145004       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 17:14:29.155409       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 17:14:30.045144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:14:30.213109       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 17:14:30.248462       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:14:30.275469       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:14:30.285240       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:14:30.293036       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:14:30.331367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.36.65"}
	I1019 17:14:30.346681       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.74.254"}
	I1019 17:14:41.681377       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:41.681419       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:41.732275       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 17:14:41.782767       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 17:14:41.782767       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face] <==
	I1019 17:14:41.635829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.261µs"
	I1019 17:14:41.736935       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1019 17:14:41.737007       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1019 17:14:41.744502       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-9tv62"
	I1019 17:14:41.745544       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	I1019 17:14:41.756211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.597133ms"
	I1019 17:14:41.756427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.849265ms"
	I1019 17:14:41.763648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.383429ms"
	I1019 17:14:41.763735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.009µs"
	I1019 17:14:41.763651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.173678ms"
	I1019 17:14:41.763783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.033µs"
	I1019 17:14:41.770030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.12µs"
	I1019 17:14:41.779999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.96µs"
	I1019 17:14:41.828726       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:14:41.880982       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:14:41.881011       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:14:44.597795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.873µs"
	I1019 17:14:45.603921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.675µs"
	I1019 17:14:46.620466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85µs"
	I1019 17:14:47.617258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.208632ms"
	I1019 17:14:47.617559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.02µs"
	I1019 17:15:06.677694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.903µs"
	I1019 17:15:09.033605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.504026ms"
	I1019 17:15:09.033775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.443µs"
	I1019 17:15:12.089648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.765µs"
	
	
	==> kube-proxy [55c6a978b088cdf7358bab39ddcabd75fc5780747290a484f984a56f7a86398c] <==
	I1019 17:14:29.967681       1 server_others.go:69] "Using iptables proxy"
	I1019 17:14:29.977442       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:14:29.995981       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:14:29.998518       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:14:29.998561       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:14:29.998570       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:14:29.998613       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:14:29.998866       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:14:29.998889       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:29.999671       1 config.go:315] "Starting node config controller"
	I1019 17:14:29.999736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:14:29.999844       1 config.go:188] "Starting service config controller"
	I1019 17:14:29.999884       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:14:29.999967       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:14:29.999974       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:14:30.099918       1 shared_informer.go:318] Caches are synced for node config
	I1019 17:14:30.101012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 17:14:30.101025       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c] <==
	I1019 17:14:27.652494       1 serving.go:348] Generated self-signed cert in-memory
	W1019 17:14:29.052545       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:14:29.056132       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:14:29.056163       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:14:29.056173       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:14:29.094016       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 17:14:29.095464       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:29.098137       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:14:29.098190       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 17:14:29.098848       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 17:14:29.099005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1019 17:14:29.198367       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.870947     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf42ac24-dcdc-400d-a17f-b022ff5102f1-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9tv62\" (UID: \"bf42ac24-dcdc-400d-a17f-b022ff5102f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871003     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2b82\" (UniqueName: \"kubernetes.io/projected/93067392-e0af-4f62-9b02-cbe31f8c0617-kube-api-access-x2b82\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fw6d\" (UID: \"93067392-e0af-4f62-9b02-cbe31f8c0617\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871028     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnztp\" (UniqueName: \"kubernetes.io/projected/bf42ac24-dcdc-400d-a17f-b022ff5102f1-kube-api-access-hnztp\") pod \"kubernetes-dashboard-8694d4445c-9tv62\" (UID: \"bf42ac24-dcdc-400d-a17f-b022ff5102f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871048     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/93067392-e0af-4f62-9b02-cbe31f8c0617-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fw6d\" (UID: \"93067392-e0af-4f62-9b02-cbe31f8c0617\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	Oct 19 17:14:44 old-k8s-version-904967 kubelet[730]: I1019 17:14:44.585943     730 scope.go:117] "RemoveContainer" containerID="84eff4894eed7f2967cdf92e4d59963de9c70d8d75b7be73cc32bff3b3f5d867"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: I1019 17:14:45.590229     730 scope.go:117] "RemoveContainer" containerID="84eff4894eed7f2967cdf92e4d59963de9c70d8d75b7be73cc32bff3b3f5d867"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: I1019 17:14:45.590415     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: E1019 17:14:45.590810     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:14:46 old-k8s-version-904967 kubelet[730]: I1019 17:14:46.594587     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:46 old-k8s-version-904967 kubelet[730]: E1019 17:14:46.595004     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:14:47 old-k8s-version-904967 kubelet[730]: I1019 17:14:47.610537     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62" podStartSLOduration=1.2342846729999999 podCreationTimestamp="2025-10-19 17:14:41 +0000 UTC" firstStartedPulling="2025-10-19 17:14:42.077376343 +0000 UTC m=+15.649483445" lastFinishedPulling="2025-10-19 17:14:47.453558319 +0000 UTC m=+21.025665424" observedRunningTime="2025-10-19 17:14:47.609796966 +0000 UTC m=+21.181904073" watchObservedRunningTime="2025-10-19 17:14:47.610466652 +0000 UTC m=+21.182573763"
	Oct 19 17:14:52 old-k8s-version-904967 kubelet[730]: I1019 17:14:52.055448     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:52 old-k8s-version-904967 kubelet[730]: E1019 17:14:52.055717     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:00 old-k8s-version-904967 kubelet[730]: I1019 17:15:00.635678     730 scope.go:117] "RemoveContainer" containerID="f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.525251     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.659056     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.659379     730 scope.go:117] "RemoveContainer" containerID="d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: E1019 17:15:06.659840     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:12 old-k8s-version-904967 kubelet[730]: I1019 17:15:12.055515     730 scope.go:117] "RemoveContainer" containerID="d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	Oct 19 17:15:12 old-k8s-version-904967 kubelet[730]: E1019 17:15:12.056410     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:15:23 old-k8s-version-904967 kubelet[730]: I1019 17:15:23.131348     730 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: kubelet.service: Consumed 1.716s CPU time.
	
	
	==> kubernetes-dashboard [1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5] <==
	2025/10/19 17:14:47 Starting overwatch
	2025/10/19 17:14:47 Using namespace: kubernetes-dashboard
	2025/10/19 17:14:47 Using in-cluster config to connect to apiserver
	2025/10/19 17:14:47 Using secret token for csrf signing
	2025/10/19 17:14:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:14:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:14:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 17:14:47 Generating JWE encryption key
	2025/10/19 17:14:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:14:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:14:47 Initializing JWE encryption key from synchronized object
	2025/10/19 17:14:47 Creating in-cluster Sidecar client
	2025/10/19 17:14:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:14:47 Serving insecurely on HTTP port: 9090
	2025/10/19 17:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1] <==
	I1019 17:15:00.706598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:15:00.720440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:15:00.720531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:15:18.212207       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:15:18.212379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96!
	I1019 17:15:18.212345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eb69d94-a491-4ab8-b2b9-5d7636ed3c57", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96 became leader
	I1019 17:15:18.313242       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96!
	
	
	==> storage-provisioner [f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e] <==
	I1019 17:14:29.934273       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:14:59.936585       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-904967 -n old-k8s-version-904967: exit status 2 (332.51462ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-904967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-904967
helpers_test.go:243: (dbg) docker inspect old-k8s-version-904967:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	        "Created": "2025-10-19T17:13:07.590891639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:14:20.219769719Z",
	            "FinishedAt": "2025-10-19T17:14:19.276211912Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/hosts",
	        "LogPath": "/var/lib/docker/containers/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719/c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719-json.log",
	        "Name": "/old-k8s-version-904967",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-904967:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-904967",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0f82ef529f88e13692e76592a85ee8e2eb1cb4f227f77088f9bbbb0466db719",
	                "LowerDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/merged",
	                "UpperDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/diff",
	                "WorkDir": "/var/lib/docker/overlay2/305a170662898a69b3b459b30af2aee1e923f246f5b4b75beb501c15e4bfc402/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-904967",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-904967/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-904967",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-904967",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02522498c863643b1b09524efd2bf17f7a439b7df3897cb9cded8bdf4c7afe32",
	            "SandboxKey": "/var/run/docker/netns/02522498c863",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-904967": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a9:14:07:e7:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfc938debfe6129c7f9048a0a383817f7b1fb100af5f0af7c3b32f6517e76495",
	                    "EndpointID": "d529790b30aabcd542e1f896aaa4a643570519c05c29343bbe41c310fa0142ef",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-904967",
	                        "c0f82ef529f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967: exit status 2 (332.088702ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-904967 logs -n 25: (1.192215269s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-639932 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ ssh     │ -p cert-options-639932 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ delete  │ -p cert-options-639932                                                                                                                                                                                                                        │ cert-options-639932       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │                     │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-447724    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ stop    │ stopped-upgrade-659566 stop                                                                                                                                                                                                                   │ stopped-upgrade-659566    │ jenkins │ v1.32.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566    │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724    │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996         │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139        │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967    │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
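
Note: the last audit entry (pause -p old-k8s-version-904967) has no end time and is the step this test failed on. A hedged local reproduction, assuming the same out/ binary layout the harness uses:

	out/minikube-linux-amd64 pause -p old-k8s-version-904967 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967
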
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:15:14.733521  251026 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:14.733767  251026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:14.733775  251026 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:14.733779  251026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:14.733976  251026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:14.734500  251026 out.go:368] Setting JSON to false
	I1019 17:15:14.735721  251026 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3461,"bootTime":1760890654,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:14.735830  251026 start.go:143] virtualization: kvm guest
	I1019 17:15:14.737727  251026 out.go:179] * [embed-certs-090139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:14.739393  251026 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:14.739387  251026 notify.go:221] Checking for updates...
	I1019 17:15:14.741689  251026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:14.743334  251026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:14.744674  251026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:14.746044  251026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:14.747304  251026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:14.749177  251026 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:14.749329  251026 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:14.749441  251026 config.go:182] Loaded profile config "old-k8s-version-904967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:15:14.749574  251026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:14.777339  251026 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:14.777418  251026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:14.838046  251026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:14.82692179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:14.838210  251026 docker.go:319] overlay module found
	I1019 17:15:14.840575  251026 out.go:179] * Using the docker driver based on user configuration
	I1019 17:15:14.841801  251026 start.go:309] selected driver: docker
	I1019 17:15:14.841824  251026 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:14.841840  251026 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:14.842508  251026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:14.900903  251026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:14.890943329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:14.901115  251026 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:15:14.901335  251026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:15:14.902903  251026 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:14.903978  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:14.904051  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:14.904063  251026 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:14.904176  251026 start.go:353] cluster config:
	{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:14.905605  251026 out.go:179] * Starting "embed-certs-090139" primary control-plane node in "embed-certs-090139" cluster
	I1019 17:15:14.906705  251026 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:14.907962  251026 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:14.909204  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:14.909249  251026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:14.909258  251026 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:14.909282  251026 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:14.909400  251026 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:14.909414  251026 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:15:14.909531  251026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json ...
	I1019 17:15:14.909558  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json: {Name:mkfc3a621c5d880f4560cce53d1586fc6ace20b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:14.931329  251026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:14.931353  251026 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:14.931370  251026 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:14.931398  251026 start.go:360] acquireMachinesLock for embed-certs-090139: {Name:mkdaa028ca10b90b55fac4626a0f749931b30e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:14.931532  251026 start.go:364] duration metric: took 110.563µs to acquireMachinesLock for "embed-certs-090139"
	I1019 17:15:14.931569  251026 start.go:93] Provisioning new machine with config: &{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:14.931649  251026 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:15:14.954538  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:17.454802  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
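
Note: the pod_ready warnings above are minikube (pid 245899) polling a coredns pod that never turns Ready. A manual check is sketched below; the k8s-app=kube-dns selector is an assumption taken from the stock coredns manifests, and kubectl must point at the kubeconfig context of whichever profile pid 245899 is starting:

	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # selector assumed
	kubectl -n kube-system describe pod coredns-66bc5c9577-s4dxw
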
	I1019 17:15:13.559052  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:13.559104  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.089164  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:16.089656  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:16.089721  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:16.089779  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:16.123334  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:16.123361  219832 cri.go:89] found id: ""
	I1019 17:15:16.123372  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:16.123429  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.127987  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:16.128093  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:16.158797  219832 cri.go:89] found id: ""
	I1019 17:15:16.158824  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.158835  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:16.158846  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:16.158910  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:16.187518  219832 cri.go:89] found id: ""
	I1019 17:15:16.187541  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.187550  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:16.187556  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:16.187613  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:16.215732  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:16.215752  219832 cri.go:89] found id: ""
	I1019 17:15:16.215760  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:16.215815  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.219901  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:16.219960  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:16.251157  219832 cri.go:89] found id: ""
	I1019 17:15:16.251184  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.251194  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:16.251202  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:16.251264  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:16.280233  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.280260  219832 cri.go:89] found id: ""
	I1019 17:15:16.280270  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:16.280332  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:16.284541  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:16.284611  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:16.313991  219832 cri.go:89] found id: ""
	I1019 17:15:16.314019  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.314030  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:16.314038  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:16.314110  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:16.343788  219832 cri.go:89] found id: ""
	I1019 17:15:16.343823  219832 logs.go:282] 0 containers: []
	W1019 17:15:16.343833  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:16.343845  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:16.343861  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:16.359868  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:16.359895  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:16.423803  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:16.423829  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:16.423843  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:16.458933  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:16.458966  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:16.513239  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:16.513274  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:16.543212  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:16.543250  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:16.594545  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:16.594580  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:16.632790  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:16.632820  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
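
Note: the stanza above is minikube's log gathering against a node whose apiserver is down (192.168.94.2): for each expected component it lists matching CRI containers, then tails logs for each hit (only kube-apiserver, kube-scheduler, and kube-controller-manager had containers here). A condensed shell sketch of the same loop, with the individual commands taken verbatim from the log (run on the node, e.g. via minikube ssh):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  id=$(sudo crictl ps -a --quiet --name="$name")     # empty when the component has no container
	  [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
	done
	sudo journalctl -u crio -n 400     # CRI-O runtime logs
	sudo journalctl -u kubelet -n 400  # kubelet logs
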
	I1019 17:15:14.934425  251026 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:14.934698  251026 start.go:159] libmachine.API.Create for "embed-certs-090139" (driver="docker")
	I1019 17:15:14.934733  251026 client.go:171] LocalClient.Create starting
	I1019 17:15:14.934818  251026 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:14.934857  251026 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:14.934880  251026 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:14.934964  251026 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:14.934990  251026 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:14.935017  251026 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:14.935472  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:14.954362  251026 cli_runner.go:211] docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:14.954486  251026 network_create.go:284] running [docker network inspect embed-certs-090139] to gather additional debugging logs...
	I1019 17:15:14.954510  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139
	W1019 17:15:14.973209  251026 cli_runner.go:211] docker network inspect embed-certs-090139 returned with exit code 1
	I1019 17:15:14.973240  251026 network_create.go:287] error running [docker network inspect embed-certs-090139]: docker network inspect embed-certs-090139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-090139 not found
	I1019 17:15:14.973263  251026 network_create.go:289] output of [docker network inspect embed-certs-090139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-090139 not found
	
	** /stderr **
	I1019 17:15:14.973345  251026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:14.992660  251026 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:14.993386  251026 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:14.994106  251026 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:14.994703  251026 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73bac96357aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:13:5a:d3:70} reservation:<nil>}
	I1019 17:15:14.995279  251026 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cfc938debfe6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ce:b6:4f:40:e7:1c} reservation:<nil>}
	I1019 17:15:14.995698  251026 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-40c59d31eea2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:d3:5e:b8:e0:1d} reservation:<nil>}
	I1019 17:15:14.996505  251026 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f22780}
	I1019 17:15:14.996542  251026 network_create.go:124] attempt to create docker network embed-certs-090139 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1019 17:15:14.996597  251026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-090139 embed-certs-090139
	I1019 17:15:15.059936  251026 network_create.go:108] docker network embed-certs-090139 192.168.103.0/24 created
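
Note: network.go above shows minikube's subnet scan: each candidate /24 from 192.168.49.0 upward is skipped while a host bridge already owns it, and the first free one (192.168.103.0/24) becomes the new network. The same occupancy can be listed from the host with standard docker CLI:

	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
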
	I1019 17:15:15.059973  251026 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-090139" container
	I1019 17:15:15.060039  251026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:15.080111  251026 cli_runner.go:164] Run: docker volume create embed-certs-090139 --label name.minikube.sigs.k8s.io=embed-certs-090139 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:15.101182  251026 oci.go:103] Successfully created a docker volume embed-certs-090139
	I1019 17:15:15.101306  251026 cli_runner.go:164] Run: docker run --rm --name embed-certs-090139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-090139 --entrypoint /usr/bin/test -v embed-certs-090139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:15:15.526386  251026 oci.go:107] Successfully prepared a docker volume embed-certs-090139
	I1019 17:15:15.526425  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:15.526451  251026 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:15:15.526525  251026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-090139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:15:19.954157  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:22.457295  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:19.238127  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:19.238618  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:19.238682  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:19.238726  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:19.266311  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:19.266338  219832 cri.go:89] found id: ""
	I1019 17:15:19.266348  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:19.266414  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.270728  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:19.270790  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:19.298278  219832 cri.go:89] found id: ""
	I1019 17:15:19.298301  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.298309  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:19.298314  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:19.298364  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:19.326745  219832 cri.go:89] found id: ""
	I1019 17:15:19.326775  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.326787  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:19.326794  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:19.326854  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:19.354905  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:19.354926  219832 cri.go:89] found id: ""
	I1019 17:15:19.354933  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:19.354980  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.359441  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:19.359508  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:19.389346  219832 cri.go:89] found id: ""
	I1019 17:15:19.389383  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.389395  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:19.389403  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:19.389463  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:19.418590  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:19.418610  219832 cri.go:89] found id: ""
	I1019 17:15:19.418618  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:19.418684  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:19.423198  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:19.423269  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:19.452003  219832 cri.go:89] found id: ""
	I1019 17:15:19.452029  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.452040  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:19.452048  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:19.452117  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:19.480534  219832 cri.go:89] found id: ""
	I1019 17:15:19.480570  219832 logs.go:282] 0 containers: []
	W1019 17:15:19.480580  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:19.480592  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:19.480608  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:19.533388  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:19.533426  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:19.562013  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:19.562040  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:19.617533  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:19.617569  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:19.649762  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:19.649788  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:19.738452  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:19.738490  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:19.754204  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:19.754238  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:19.814233  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:19.814252  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:19.814268  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.347422  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:22.347850  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:22.347905  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:22.347964  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:22.381054  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.381097  219832 cri.go:89] found id: ""
	I1019 17:15:22.381113  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:22.381178  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.386755  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:22.386824  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:22.418110  219832 cri.go:89] found id: ""
	I1019 17:15:22.418133  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.418141  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:22.418146  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:22.418198  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:22.449584  219832 cri.go:89] found id: ""
	I1019 17:15:22.449610  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.449619  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:22.449627  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:22.449690  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:22.484198  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:22.484225  219832 cri.go:89] found id: ""
	I1019 17:15:22.484237  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:22.484294  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.489189  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:22.489250  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:22.518428  219832 cri.go:89] found id: ""
	I1019 17:15:22.518454  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.518462  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:22.518468  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:22.518521  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:22.552140  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:22.552166  219832 cri.go:89] found id: ""
	I1019 17:15:22.552177  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:22.552233  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:22.556414  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:22.556481  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:22.587591  219832 cri.go:89] found id: ""
	I1019 17:15:22.587619  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.587631  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:22.587639  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:22.587696  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:22.620495  219832 cri.go:89] found id: ""
	I1019 17:15:22.620521  219832 logs.go:282] 0 containers: []
	W1019 17:15:22.620531  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:22.620541  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:22.620556  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:22.693954  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:22.693978  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:22.693993  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:22.738876  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:22.738919  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:22.804166  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:22.804204  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:22.843279  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:22.843312  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:22.895536  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:22.895573  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:22.929549  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:22.929589  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:23.045429  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:23.045466  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:20.085324  251026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-090139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.558752737s)
	I1019 17:15:20.085364  251026 kic.go:203] duration metric: took 4.558910628s to extract preloaded images to volume ...
	W1019 17:15:20.085467  251026 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:15:20.085504  251026 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:15:20.085552  251026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:15:20.145764  251026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-090139 --name embed-certs-090139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-090139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-090139 --network embed-certs-090139 --ip 192.168.103.2 --volume embed-certs-090139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:15:20.429669  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Running}}
	I1019 17:15:20.449537  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.469800  251026 cli_runner.go:164] Run: docker exec embed-certs-090139 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:15:20.516685  251026 oci.go:144] the created container "embed-certs-090139" has a running status.
	I1019 17:15:20.516758  251026 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa...
	I1019 17:15:20.679677  251026 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:15:20.710387  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.732716  251026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:15:20.732743  251026 kic_runner.go:114] Args: [docker exec --privileged embed-certs-090139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:15:20.789014  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:20.812619  251026 machine.go:94] provisionDockerMachine start ...
	I1019 17:15:20.812722  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:20.835671  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:20.836009  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:20.836032  251026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:15:20.975557  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-090139
	
	I1019 17:15:20.975597  251026 ubuntu.go:182] provisioning hostname "embed-certs-090139"
	I1019 17:15:20.975672  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:20.996125  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:20.996332  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:20.996347  251026 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-090139 && echo "embed-certs-090139" | sudo tee /etc/hostname
	I1019 17:15:21.145052  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-090139
	
	I1019 17:15:21.145154  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.165671  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:21.165930  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:21.165954  251026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-090139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-090139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-090139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:15:21.302201  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:15:21.302226  251026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:15:21.302262  251026 ubuntu.go:190] setting up certificates
	I1019 17:15:21.302279  251026 provision.go:84] configureAuth start
	I1019 17:15:21.302350  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:21.321982  251026 provision.go:143] copyHostCerts
	I1019 17:15:21.322050  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:15:21.322093  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:15:21.322178  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:15:21.322304  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:15:21.322317  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:15:21.322359  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:15:21.322443  251026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:15:21.322454  251026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:15:21.322490  251026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:15:21.322562  251026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.embed-certs-090139 san=[127.0.0.1 192.168.103.2 embed-certs-090139 localhost minikube]
	I1019 17:15:21.554655  251026 provision.go:177] copyRemoteCerts
	I1019 17:15:21.554718  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:15:21.554756  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.574049  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:21.673466  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:15:21.694318  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:15:21.715248  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:15:21.733349  251026 provision.go:87] duration metric: took 431.05374ms to configureAuth
	I1019 17:15:21.733384  251026 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:15:21.733607  251026 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:21.733786  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:21.752290  251026 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:21.752494  251026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1019 17:15:21.752512  251026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:15:22.002937  251026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:15:22.002984  251026 machine.go:97] duration metric: took 1.190326989s to provisionDockerMachine
	I1019 17:15:22.002999  251026 client.go:174] duration metric: took 7.068259055s to LocalClient.Create
	I1019 17:15:22.003029  251026 start.go:167] duration metric: took 7.068331932s to libmachine.API.Create "embed-certs-090139"
	I1019 17:15:22.003046  251026 start.go:293] postStartSetup for "embed-certs-090139" (driver="docker")
	I1019 17:15:22.003062  251026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:15:22.003178  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:15:22.003226  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.022039  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.123010  251026 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:15:22.127283  251026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:15:22.127313  251026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:15:22.127326  251026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:15:22.127386  251026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:15:22.127483  251026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:15:22.127618  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:15:22.136014  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:22.159351  251026 start.go:296] duration metric: took 156.288638ms for postStartSetup
	I1019 17:15:22.159762  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:22.179062  251026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json ...
	I1019 17:15:22.179438  251026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:15:22.179490  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.197485  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.292805  251026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:15:22.298380  251026 start.go:128] duration metric: took 7.366713724s to createHost
	I1019 17:15:22.298409  251026 start.go:83] releasing machines lock for "embed-certs-090139", held for 7.366858572s
	I1019 17:15:22.298485  251026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-090139
	I1019 17:15:22.318657  251026 ssh_runner.go:195] Run: cat /version.json
	I1019 17:15:22.318724  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.318743  251026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:15:22.318806  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:22.338469  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.339812  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:22.437526  251026 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:22.505136  251026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:15:22.546609  251026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:15:22.552512  251026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:15:22.552589  251026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:15:22.584303  251026 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
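
(The find/mv one-liner above sidelines competing bridge and podman CNI configs by appending a .mk_disabled suffix so CRI-O ignores them. A sketch of the same rename pass, assuming the standard /etc/cni/net.d location; this is illustrative, not minikube's cni.go.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI config files so the runtime
// no longer loads them, mirroring the shell one-liner in the log above.
func disableBridgeConfigs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a plain file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeConfigs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
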
	I1019 17:15:22.584332  251026 start.go:496] detecting cgroup driver to use...
	I1019 17:15:22.584370  251026 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:15:22.584420  251026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:15:22.603536  251026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:15:22.621091  251026 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:15:22.621153  251026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:15:22.640749  251026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:15:22.662635  251026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:15:22.778522  251026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:15:22.878790  251026 docker.go:234] disabling docker service ...
	I1019 17:15:22.878862  251026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:15:22.899357  251026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:15:22.913445  251026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:15:23.018603  251026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:15:23.116655  251026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:15:23.130751  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:15:23.146643  251026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:15:23.146715  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.158103  251026 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:15:23.158174  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.168093  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.179185  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.189138  251026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:15:23.199467  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.209497  251026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.226167  251026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:23.235582  251026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:15:23.243700  251026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:15:23.251605  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:23.337262  251026 ssh_runner.go:195] Run: sudo systemctl restart crio
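
(The sed sequence above rewrites the CRI-O drop-in: pin the pause image, switch the cgroup manager to systemd, and open unprivileged ports via default_sysctls, then restart crio. A sketch of the two core substitutions, assuming the same drop-in path as the log; error handling is minimal and this is not minikube's crio.go.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same substitutions the sed commands above perform, values taken from the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// A `systemctl restart crio` is still required for the change to take effect.
}
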
	I1019 17:15:23.451851  251026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:15:23.451919  251026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:15:23.456483  251026 start.go:564] Will wait 60s for crictl version
	I1019 17:15:23.456548  251026 ssh_runner.go:195] Run: which crictl
	I1019 17:15:23.460493  251026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:15:23.487160  251026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:15:23.487241  251026 ssh_runner.go:195] Run: crio --version
	I1019 17:15:23.517832  251026 ssh_runner.go:195] Run: crio --version
	I1019 17:15:23.548698  251026 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:15:23.550011  251026 cli_runner.go:164] Run: docker network inspect embed-certs-090139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:23.569122  251026 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1019 17:15:23.573653  251026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
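
(The bash one-liner above updates /etc/hosts idempotently: strip any existing record for the host, append a fresh "IP<TAB>host" line, and copy the result back with sudo. A sketch of the same logic; writing the file directly rather than via /tmp/h.$$ and sudo cp is a simplification for illustration.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "<TAB>host" and appends a
// fresh "ip<TAB>host" record, matching the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
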
	I1019 17:15:23.586171  251026 kubeadm.go:884] updating cluster {Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:15:23.586307  251026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:23.586359  251026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:23.629543  251026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:23.629577  251026 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:15:23.629637  251026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:23.657766  251026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:23.657789  251026 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:15:23.657799  251026 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1019 17:15:23.657896  251026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-090139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:15:23.657993  251026 ssh_runner.go:195] Run: crio config
	I1019 17:15:23.711682  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:23.711734  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:23.711751  251026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:15:23.711771  251026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-090139 NodeName:embed-certs-090139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:15:23.711903  251026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-090139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
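
(The multi-document config above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by "---". A quick sanity-check sketch that lists each document's apiVersion and kind; this is a plain string scan rather than a YAML parser, so it only suits well-formed files like the one shown.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm configs separate documents with "---" on its own line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("doc %d: %s %s\n", i, apiVersion, kind)
	}
}

For the config shown above this would print four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.
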
	
	I1019 17:15:23.711973  251026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:15:23.720816  251026 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:15:23.720893  251026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:15:23.729730  251026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1019 17:15:23.744357  251026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:15:23.761329  251026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1019 17:15:23.776293  251026 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:15:23.780853  251026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:15:23.793006  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:23.879695  251026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:23.915880  251026 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139 for IP: 192.168.103.2
	I1019 17:15:23.915902  251026 certs.go:195] generating shared ca certs ...
	I1019 17:15:23.915918  251026 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.916099  251026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:15:23.916142  251026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:15:23.916151  251026 certs.go:257] generating profile certs ...
	I1019 17:15:23.916210  251026 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key
	I1019 17:15:23.916228  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt with IP's: []
	I1019 17:15:23.994616  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt ...
	I1019 17:15:23.994647  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.crt: {Name:mk5b0ad2a9e5bc2fcda176fa53af3350d10462e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.994966  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key ...
	I1019 17:15:23.994991  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/client.key: {Name:mkd86b0c314959b8e88f43d0af08f937c5cbe956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:23.995109  251026 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374
	I1019 17:15:23.995125  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1019 17:15:24.334154  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 ...
	I1019 17:15:24.334184  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374: {Name:mk20fe03a8c98b5c6934b8d43bbc7d300f7be4a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.334345  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374 ...
	I1019 17:15:24.334358  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374: {Name:mk71a2678fb5c001b09fa91c9eec993482b058f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.334427  251026 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt.40868374 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt
	I1019 17:15:24.334502  251026 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key.40868374 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key
	I1019 17:15:24.334558  251026 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key
	I1019 17:15:24.334574  251026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt with IP's: []
	I1019 17:15:24.528305  251026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt ...
	I1019 17:15:24.528332  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt: {Name:mk0bc06ac6366f3e90200e24d483840c29c7f209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:24.528503  251026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key ...
	I1019 17:15:24.528525  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key: {Name:mk8430ddda2f85e9c315d59524911ccfa2efc9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
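
(The crypto.go lines above generate the apiserver certificate with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]: the in-cluster service IP, loopback, and the node IP. A minimal sketch of issuing a certificate with those SANs using the standard crypto/x509 API; minikube signs with its CA, whereas this self-signs to keep the example short.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IPAddresses: []net.IP{ // the SAN list from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. minikube passes its CA here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
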
	I1019 17:15:24.528754  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:15:24.528806  251026 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:15:24.528821  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:15:24.528853  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:15:24.528889  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:15:24.528923  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:15:24.528988  251026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:24.529643  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:15:24.550584  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:15:24.571620  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:15:24.593312  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:15:24.612383  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:15:24.632776  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:15:24.654315  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:15:24.675290  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:15:24.697930  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:15:24.722524  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:15:24.743770  251026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:15:24.762776  251026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:15:24.776859  251026 ssh_runner.go:195] Run: openssl version
	I1019 17:15:24.783430  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:15:24.792719  251026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:24.796750  251026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:24.796833  251026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:24.831686  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:15:24.841390  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:15:24.850772  251026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:15:24.855245  251026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:15:24.855303  251026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:15:24.891762  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:15:24.902413  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:15:24.913360  251026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:15:24.918098  251026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:15:24.918182  251026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:15:24.958895  251026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:15:24.968720  251026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:15:24.973165  251026 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:15:24.973235  251026 kubeadm.go:401] StartCluster: {Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:24.973318  251026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:15:24.973391  251026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:15:25.005516  251026 cri.go:89] found id: ""
	I1019 17:15:25.005595  251026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:15:25.015086  251026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:15:25.024127  251026 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:15:25.024196  251026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:15:25.032915  251026 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:15:25.032933  251026 kubeadm.go:158] found existing configuration files:
	
	I1019 17:15:25.032981  251026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:15:25.041819  251026 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:15:25.041899  251026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:15:25.050392  251026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:15:25.059032  251026 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:15:25.059126  251026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:15:25.067857  251026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:15:25.076816  251026 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:15:25.076868  251026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:15:25.085725  251026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:15:25.096146  251026 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:15:25.096214  251026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
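
(The four grep/rm pairs above implement stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm regenerates it. Here all four are absent, which is the normal first-start path. A sketch of the same loop, illustrative rather than minikube's kubeadm.go.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (rm -f semantics,
			// so the error from removing a nonexistent file is ignored).
			os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
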
	I1019 17:15:25.105905  251026 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:15:25.152406  251026 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:15:25.152480  251026 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:15:25.173778  251026 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:15:25.173864  251026 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:15:25.173908  251026 kubeadm.go:319] OS: Linux
	I1019 17:15:25.174655  251026 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:15:25.174752  251026 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:15:25.174827  251026 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:15:25.174901  251026 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:15:25.174967  251026 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:15:25.175036  251026 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:15:25.175162  251026 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:15:25.175230  251026 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:15:25.237657  251026 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:15:25.237823  251026 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:15:25.237957  251026 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:15:25.244904  251026 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.489696137Z" level=info msg="Created container 1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62/kubernetes-dashboard" id=89c2af69-c0d2-4410-b05c-a716b9f64393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.490426694Z" level=info msg="Starting container: 1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5" id=d9be2313-4257-48c0-b5a6-17ecc510374a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:14:47 old-k8s-version-904967 crio[569]: time="2025-10-19T17:14:47.492335274Z" level=info msg="Started container" PID=1723 containerID=1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62/kubernetes-dashboard id=d9be2313-4257-48c0-b5a6-17ecc510374a name=/runtime.v1.RuntimeService/StartContainer sandboxID=625d5e3cb20e2607d17cbff1dd36a168fc565b75040170e8a6f9ecb3ca1b2906
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.636329974Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6dd0b37c-ab03-4814-90dc-674840fcded3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.637516447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a17c66b1-fea8-4323-9029-21e90f6be831 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.638530484Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5bb61db0-0354-404b-a90a-5348ae7178cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.638801851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.645920248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646261466Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32e8931698d4654f5d77ee3e9f59bddb6653af6c572c95d3d1ef2c4598e8ff6f/merged/etc/passwd: no such file or directory"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646358522Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32e8931698d4654f5d77ee3e9f59bddb6653af6c572c95d3d1ef2c4598e8ff6f/merged/etc/group: no such file or directory"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.646729508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.685673941Z" level=info msg="Created container 6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1: kube-system/storage-provisioner/storage-provisioner" id=5bb61db0-0354-404b-a90a-5348ae7178cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.686596381Z" level=info msg="Starting container: 6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1" id=9838e2cf-9e9d-4681-be50-70ca214bca08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:00 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:00.689555574Z" level=info msg="Started container" PID=1745 containerID=6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1 description=kube-system/storage-provisioner/storage-provisioner id=9838e2cf-9e9d-4681-be50-70ca214bca08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79e6edbfdddc93ebc72ee3704b70e0eb166908a3fbc458f9c6874078b6fd34e4
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.525959972Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e39600bc-391c-4444-a161-85f80d10e021 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.526939368Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea4bd173-c7ca-4aed-b615-31b2e7a226a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.528212918Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=7fddbc13-cb7a-48fb-a1a9-ae06105acde2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.528479714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.535166557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.535885246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.567244409Z" level=info msg="Created container d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=7fddbc13-cb7a-48fb-a1a9-ae06105acde2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.568531791Z" level=info msg="Starting container: d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d" id=b04dbbde-6799-4030-858c-3d705eab3b81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.570785877Z" level=info msg="Started container" PID=1760 containerID=d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper id=b04dbbde-6799-4030-858c-3d705eab3b81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c0423b95b0fde5c11431d46f3cc6d12d059a92672688085c4fac04c100a4fe4
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.661138428Z" level=info msg="Removing container: c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529" id=5d92c074-c31d-4926-8aaa-a67df0ad8c95 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:06 old-k8s-version-904967 crio[569]: time="2025-10-19T17:15:06.672236636Z" level=info msg="Removed container c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d/dashboard-metrics-scraper" id=5d92c074-c31d-4926-8aaa-a67df0ad8c95 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d09ce49842899       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   6c0423b95b0fd       dashboard-metrics-scraper-5f989dc9cf-7fw6d       kubernetes-dashboard
	6977bb31ffcd6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   79e6edbfdddc9       storage-provisioner                              kube-system
	1440d21cef285       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago       Running             kubernetes-dashboard        0                   625d5e3cb20e2       kubernetes-dashboard-8694d4445c-9tv62            kubernetes-dashboard
	b30c1cf139693       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   10bed606fc324       busybox                                          default
	ce94b6419b2c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   40731cdf61bdd       coredns-5dd5756b68-qdvcm                         kube-system
	f5b580c231276       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   79e6edbfdddc9       storage-provisioner                              kube-system
	55c6a978b088c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   aafb0a20b0a84       kube-proxy-gr6m9                                 kube-system
	1cb477f3e2b8b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   01776bd797f2e       kindnet-lh8rm                                    kube-system
	d585a77a4eff3       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   9aac900289460       kube-scheduler-old-k8s-version-904967            kube-system
	f8fee443a165e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   c29781f8a9392       etcd-old-k8s-version-904967                      kube-system
	78ff50c78f7cc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   9ab2d8754db1e       kube-controller-manager-old-k8s-version-904967   kube-system
	783eeba3fb702       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   b058c33a5ecb8       kube-apiserver-old-k8s-version-904967            kube-system
	
	
	==> coredns [ce94b6419b2c4ac1db095de413bd1d82939921cfe884c2239c2cb800683b9fc5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55707 - 45089 "HINFO IN 8344470819792762176.7498521989286999521. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.468137845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-904967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-904967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-904967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-904967
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:14:59 +0000   Sun, 19 Oct 2025 17:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-904967
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                1f7bd5b5-08c8-4ce1-be37-64fa8f96d211
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-qdvcm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-904967                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-lh8rm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-904967             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-904967    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-gr6m9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-904967             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fw6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9tv62             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node old-k8s-version-904967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node old-k8s-version-904967 event: Registered Node old-k8s-version-904967 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-904967 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-904967 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node old-k8s-version-904967 event: Registered Node old-k8s-version-904967 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [f8fee443a165e8c94dbca458d7be0af55ddfb347583a529bb18135d08cf99cda] <==
	{"level":"info","ts":"2025-10-19T17:14:27.094467Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:14:27.09543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-19T17:14:27.096028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T17:14:27.096235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:27.096274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:27.098119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:14:27.098622Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:14:27.098652Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:14:27.098696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:14:27.098706Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:14:28.087324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:28.087442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.087473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:28.089035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:28.089032Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-904967 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:14:28.089099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:28.089327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:28.089384Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:28.091098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:14:28.091488Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:15:20.043984Z","caller":"traceutil/trace.go:171","msg":"trace[1162046623] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"146.090649ms","start":"2025-10-19T17:15:19.897863Z","end":"2025-10-19T17:15:20.043954Z","steps":["trace[1162046623] 'process raft request'  (duration: 145.939967ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:15:27 up 57 min,  0 user,  load average: 3.55, 2.84, 1.71
	Linux old-k8s-version-904967 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cb477f3e2b8baf572ed7209b429278d823d78e9b46164608b3a173129ae017e] <==
	I1019 17:14:30.156177       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:14:30.156566       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:14:30.156751       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:14:30.156782       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:14:30.156811       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:14:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:14:30.453656       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:14:30.453754       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:14:30.453767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:14:30.454249       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:14:30.850323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:14:30.850358       1 metrics.go:72] Registering metrics
	I1019 17:14:30.850435       1 controller.go:711] "Syncing nftables rules"
	I1019 17:14:40.459679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:14:40.459741       1 main.go:301] handling current node
	I1019 17:14:50.453653       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:14:50.453697       1 main.go:301] handling current node
	I1019 17:15:00.454419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:00.454542       1 main.go:301] handling current node
	I1019 17:15:10.454190       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:10.454236       1 main.go:301] handling current node
	I1019 17:15:20.460176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:15:20.460217       1 main.go:301] handling current node
	
	
	==> kube-apiserver [783eeba3fb702b2ab824254b8901f2f139f59ef0c6c596fed9712ff31faef63f] <==
	I1019 17:14:29.135155       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 17:14:29.142758       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 17:14:29.142774       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:14:29.142790       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:14:29.142885       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 17:14:29.142970       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:14:29.142980       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:14:29.142993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:14:29.143000       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:14:29.143389       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:14:29.145004       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 17:14:29.155409       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 17:14:30.045144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:14:30.213109       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 17:14:30.248462       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:14:30.275469       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:14:30.285240       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:14:30.293036       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:14:30.331367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.36.65"}
	I1019 17:14:30.346681       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.74.254"}
	I1019 17:14:41.681377       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:41.681419       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:41.732275       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 17:14:41.782767       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 17:14:41.782767       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [78ff50c78f7cce6ccee8c1e7478bfa6937ce35b306cb412c85a9d2a83a64face] <==
	I1019 17:14:41.635829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.261µs"
	I1019 17:14:41.736935       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1019 17:14:41.737007       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1019 17:14:41.744502       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-9tv62"
	I1019 17:14:41.745544       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	I1019 17:14:41.756211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.597133ms"
	I1019 17:14:41.756427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.849265ms"
	I1019 17:14:41.763648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.383429ms"
	I1019 17:14:41.763735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.009µs"
	I1019 17:14:41.763651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.173678ms"
	I1019 17:14:41.763783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.033µs"
	I1019 17:14:41.770030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.12µs"
	I1019 17:14:41.779999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.96µs"
	I1019 17:14:41.828726       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:14:41.880982       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:14:41.881011       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:14:44.597795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.873µs"
	I1019 17:14:45.603921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.675µs"
	I1019 17:14:46.620466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85µs"
	I1019 17:14:47.617258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.208632ms"
	I1019 17:14:47.617559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.02µs"
	I1019 17:15:06.677694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.903µs"
	I1019 17:15:09.033605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.504026ms"
	I1019 17:15:09.033775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.443µs"
	I1019 17:15:12.089648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.765µs"
	
	
	==> kube-proxy [55c6a978b088cdf7358bab39ddcabd75fc5780747290a484f984a56f7a86398c] <==
	I1019 17:14:29.967681       1 server_others.go:69] "Using iptables proxy"
	I1019 17:14:29.977442       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:14:29.995981       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:14:29.998518       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:14:29.998561       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:14:29.998570       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:14:29.998613       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:14:29.998866       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:14:29.998889       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:29.999671       1 config.go:315] "Starting node config controller"
	I1019 17:14:29.999736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:14:29.999844       1 config.go:188] "Starting service config controller"
	I1019 17:14:29.999884       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:14:29.999967       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:14:29.999974       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:14:30.099918       1 shared_informer.go:318] Caches are synced for node config
	I1019 17:14:30.101012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 17:14:30.101025       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d585a77a4eff398d568fbaf843dc59dc0a8f11ceece1172b1b6499be37a6bc8c] <==
	I1019 17:14:27.652494       1 serving.go:348] Generated self-signed cert in-memory
	W1019 17:14:29.052545       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:14:29.056132       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:14:29.056163       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:14:29.056173       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:14:29.094016       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 17:14:29.095464       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:29.098137       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:14:29.098190       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 17:14:29.098848       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 17:14:29.099005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1019 17:14:29.198367       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.870947     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf42ac24-dcdc-400d-a17f-b022ff5102f1-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9tv62\" (UID: \"bf42ac24-dcdc-400d-a17f-b022ff5102f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871003     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2b82\" (UniqueName: \"kubernetes.io/projected/93067392-e0af-4f62-9b02-cbe31f8c0617-kube-api-access-x2b82\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fw6d\" (UID: \"93067392-e0af-4f62-9b02-cbe31f8c0617\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871028     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnztp\" (UniqueName: \"kubernetes.io/projected/bf42ac24-dcdc-400d-a17f-b022ff5102f1-kube-api-access-hnztp\") pod \"kubernetes-dashboard-8694d4445c-9tv62\" (UID: \"bf42ac24-dcdc-400d-a17f-b022ff5102f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62"
	Oct 19 17:14:41 old-k8s-version-904967 kubelet[730]: I1019 17:14:41.871048     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/93067392-e0af-4f62-9b02-cbe31f8c0617-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fw6d\" (UID: \"93067392-e0af-4f62-9b02-cbe31f8c0617\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d"
	Oct 19 17:14:44 old-k8s-version-904967 kubelet[730]: I1019 17:14:44.585943     730 scope.go:117] "RemoveContainer" containerID="84eff4894eed7f2967cdf92e4d59963de9c70d8d75b7be73cc32bff3b3f5d867"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: I1019 17:14:45.590229     730 scope.go:117] "RemoveContainer" containerID="84eff4894eed7f2967cdf92e4d59963de9c70d8d75b7be73cc32bff3b3f5d867"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: I1019 17:14:45.590415     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:45 old-k8s-version-904967 kubelet[730]: E1019 17:14:45.590810     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:14:46 old-k8s-version-904967 kubelet[730]: I1019 17:14:46.594587     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:46 old-k8s-version-904967 kubelet[730]: E1019 17:14:46.595004     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:14:47 old-k8s-version-904967 kubelet[730]: I1019 17:14:47.610537     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9tv62" podStartSLOduration=1.2342846729999999 podCreationTimestamp="2025-10-19 17:14:41 +0000 UTC" firstStartedPulling="2025-10-19 17:14:42.077376343 +0000 UTC m=+15.649483445" lastFinishedPulling="2025-10-19 17:14:47.453558319 +0000 UTC m=+21.025665424" observedRunningTime="2025-10-19 17:14:47.609796966 +0000 UTC m=+21.181904073" watchObservedRunningTime="2025-10-19 17:14:47.610466652 +0000 UTC m=+21.182573763"
	Oct 19 17:14:52 old-k8s-version-904967 kubelet[730]: I1019 17:14:52.055448     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:14:52 old-k8s-version-904967 kubelet[730]: E1019 17:14:52.055717     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:00 old-k8s-version-904967 kubelet[730]: I1019 17:15:00.635678     730 scope.go:117] "RemoveContainer" containerID="f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.525251     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.659056     730 scope.go:117] "RemoveContainer" containerID="c3da2a5bbdfc05533b919ca0a4ed929aaa6e3a4bb594c8189a7418a495b5b529"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: I1019 17:15:06.659379     730 scope.go:117] "RemoveContainer" containerID="d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	Oct 19 17:15:06 old-k8s-version-904967 kubelet[730]: E1019 17:15:06.659840     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:12 old-k8s-version-904967 kubelet[730]: I1019 17:15:12.055515     730 scope.go:117] "RemoveContainer" containerID="d09ce49842899f8553d55483ba7991569651a6a48f0c338ad78e1055a5625a3d"
	Oct 19 17:15:12 old-k8s-version-904967 kubelet[730]: E1019 17:15:12.056410     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fw6d_kubernetes-dashboard(93067392-e0af-4f62-9b02-cbe31f8c0617)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fw6d" podUID="93067392-e0af-4f62-9b02-cbe31f8c0617"
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:15:23 old-k8s-version-904967 kubelet[730]: I1019 17:15:23.131348     730 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:15:23 old-k8s-version-904967 systemd[1]: kubelet.service: Consumed 1.716s CPU time.
	
	
	==> kubernetes-dashboard [1440d21cef285c712b1fd8cf829a2eb24f00c65d5e80452b50e3a10b8d8f3aa5] <==
	2025/10/19 17:14:47 Starting overwatch
	2025/10/19 17:14:47 Using namespace: kubernetes-dashboard
	2025/10/19 17:14:47 Using in-cluster config to connect to apiserver
	2025/10/19 17:14:47 Using secret token for csrf signing
	2025/10/19 17:14:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:14:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:14:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 17:14:47 Generating JWE encryption key
	2025/10/19 17:14:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:14:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:14:47 Initializing JWE encryption key from synchronized object
	2025/10/19 17:14:47 Creating in-cluster Sidecar client
	2025/10/19 17:14:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:14:47 Serving insecurely on HTTP port: 9090
	2025/10/19 17:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6977bb31ffcd6d22facb3755db7eb620c00759ea8377876599a469f0fa5f01e1] <==
	I1019 17:15:00.706598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:15:00.720440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:15:00.720531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:15:18.212207       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:15:18.212379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96!
	I1019 17:15:18.212345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eb69d94-a491-4ab8-b2b9-5d7636ed3c57", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96 became leader
	I1019 17:15:18.313242       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-904967_61b7fb93-038d-44f9-9998-64c93137ba96!
	
	
	==> storage-provisioner [f5b580c231276ddf60d434e3d348c303152e46cc277722125030d8e76cb3335e] <==
	I1019 17:14:29.934273       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:14:59.936585       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-904967 -n old-k8s-version-904967
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-904967 -n old-k8s-version-904967: exit status 2 (343.164396ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-904967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.94s)

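Triage note: the post-mortem above shows the kubelet being stopped while "minikube status" still reports the apiserver as Running, and the sibling no-preload log below fails its pause the same way once "sudo runc list -f json" returns "open /run/runc: no such file or directory". A minimal manual-triage sketch, assuming the old-k8s-version-904967 profile from this run is still available locally (the commands are stock minikube/crictl/journalctl and are not taken from the failing run itself):

	# Re-run the failing pause with verbose logging:
	out/minikube-linux-amd64 pause -p old-k8s-version-904967 --alsologtostderr -v=1
	# Confirm the CRI still sees the expected containers:
	out/minikube-linux-amd64 ssh -p old-k8s-version-904967 -- sudo crictl ps --state Running
	# Check whether the runc state directory that the pause path polls exists:
	out/minikube-linux-amd64 ssh -p old-k8s-version-904967 -- sudo ls /run/runc
	# Look for recent cri-o runtime errors:
	out/minikube-linux-amd64 ssh -p old-k8s-version-904967 -- sudo journalctl -u crio --since "5 min ago" --no-pager

If /run/runc is missing, cri-o is presumably running its containers under a different OCI runtime root (or a different runtime such as crun), so the runc-based listing that pause retries can never succeed; that would point at the pause code path rather than the cluster itself.
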
TestStartStop/group/no-preload/serial/Pause (6.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-806996 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-806996 --alsologtostderr -v=1: exit status 80 (2.405983061s)

-- stdout --
	* Pausing node no-preload-806996 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 17:15:49.681546  259662 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:49.681914  259662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:49.681928  259662 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:49.681934  259662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:49.682272  259662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:49.682606  259662 out.go:368] Setting JSON to false
	I1019 17:15:49.682656  259662 mustload.go:66] Loading cluster: no-preload-806996
	I1019 17:15:49.683124  259662 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:49.683740  259662 cli_runner.go:164] Run: docker container inspect no-preload-806996 --format={{.State.Status}}
	I1019 17:15:49.705366  259662 host.go:66] Checking if "no-preload-806996" exists ...
	I1019 17:15:49.705720  259662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:49.776983  259662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-19 17:15:49.767017492 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:49.777849  259662 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-806996 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:15:49.779757  259662 out.go:179] * Pausing node no-preload-806996 ... 
	I1019 17:15:49.780975  259662 host.go:66] Checking if "no-preload-806996" exists ...
	I1019 17:15:49.781254  259662 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:49.781310  259662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-806996
	I1019 17:15:49.801995  259662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/no-preload-806996/id_rsa Username:docker}
	I1019 17:15:49.902577  259662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:49.918100  259662 pause.go:52] kubelet running: true
	I1019 17:15:49.918179  259662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:50.098383  259662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:50.098466  259662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:50.182979  259662 cri.go:89] found id: "382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6"
	I1019 17:15:50.183009  259662 cri.go:89] found id: "c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a"
	I1019 17:15:50.183015  259662 cri.go:89] found id: "8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf"
	I1019 17:15:50.183020  259662 cri.go:89] found id: "47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	I1019 17:15:50.183023  259662 cri.go:89] found id: "b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c"
	I1019 17:15:50.183028  259662 cri.go:89] found id: "57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca"
	I1019 17:15:50.183032  259662 cri.go:89] found id: "fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b"
	I1019 17:15:50.183036  259662 cri.go:89] found id: "bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53"
	I1019 17:15:50.183040  259662 cri.go:89] found id: "59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7"
	I1019 17:15:50.183056  259662 cri.go:89] found id: "e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	I1019 17:15:50.183061  259662 cri.go:89] found id: "0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e"
	I1019 17:15:50.183094  259662 cri.go:89] found id: ""
	I1019 17:15:50.183150  259662 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:50.195475  259662 retry.go:31] will retry after 224.373449ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:50Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:15:50.420982  259662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:50.444178  259662 pause.go:52] kubelet running: false
	I1019 17:15:50.444243  259662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:50.623033  259662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:50.623149  259662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:50.706823  259662 cri.go:89] found id: "382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6"
	I1019 17:15:50.706843  259662 cri.go:89] found id: "c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a"
	I1019 17:15:50.706847  259662 cri.go:89] found id: "8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf"
	I1019 17:15:50.706850  259662 cri.go:89] found id: "47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	I1019 17:15:50.706853  259662 cri.go:89] found id: "b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c"
	I1019 17:15:50.706857  259662 cri.go:89] found id: "57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca"
	I1019 17:15:50.706861  259662 cri.go:89] found id: "fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b"
	I1019 17:15:50.706865  259662 cri.go:89] found id: "bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53"
	I1019 17:15:50.706869  259662 cri.go:89] found id: "59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7"
	I1019 17:15:50.706876  259662 cri.go:89] found id: "e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	I1019 17:15:50.706880  259662 cri.go:89] found id: "0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e"
	I1019 17:15:50.706884  259662 cri.go:89] found id: ""
	I1019 17:15:50.706931  259662 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:50.719188  259662 retry.go:31] will retry after 363.270481ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:50Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:15:51.082743  259662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:51.097198  259662 pause.go:52] kubelet running: false
	I1019 17:15:51.097249  259662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:51.240870  259662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:51.240975  259662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:51.308481  259662 cri.go:89] found id: "382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6"
	I1019 17:15:51.308504  259662 cri.go:89] found id: "c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a"
	I1019 17:15:51.308510  259662 cri.go:89] found id: "8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf"
	I1019 17:15:51.308515  259662 cri.go:89] found id: "47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	I1019 17:15:51.308519  259662 cri.go:89] found id: "b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c"
	I1019 17:15:51.308524  259662 cri.go:89] found id: "57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca"
	I1019 17:15:51.308529  259662 cri.go:89] found id: "fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b"
	I1019 17:15:51.308533  259662 cri.go:89] found id: "bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53"
	I1019 17:15:51.308537  259662 cri.go:89] found id: "59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7"
	I1019 17:15:51.308545  259662 cri.go:89] found id: "e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	I1019 17:15:51.308549  259662 cri.go:89] found id: "0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e"
	I1019 17:15:51.308553  259662 cri.go:89] found id: ""
	I1019 17:15:51.308600  259662 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:51.320627  259662 retry.go:31] will retry after 417.869275ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:51Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:15:51.739300  259662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:15:51.754950  259662 pause.go:52] kubelet running: false
	I1019 17:15:51.755015  259662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:15:51.927925  259662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:15:51.928006  259662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:15:52.007969  259662 cri.go:89] found id: "382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6"
	I1019 17:15:52.007996  259662 cri.go:89] found id: "c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a"
	I1019 17:15:52.008008  259662 cri.go:89] found id: "8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf"
	I1019 17:15:52.008013  259662 cri.go:89] found id: "47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	I1019 17:15:52.008016  259662 cri.go:89] found id: "b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c"
	I1019 17:15:52.008019  259662 cri.go:89] found id: "57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca"
	I1019 17:15:52.008022  259662 cri.go:89] found id: "fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b"
	I1019 17:15:52.008024  259662 cri.go:89] found id: "bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53"
	I1019 17:15:52.008026  259662 cri.go:89] found id: "59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7"
	I1019 17:15:52.008032  259662 cri.go:89] found id: "e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	I1019 17:15:52.008034  259662 cri.go:89] found id: "0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e"
	I1019 17:15:52.008036  259662 cri.go:89] found id: ""
	I1019 17:15:52.008091  259662 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:15:52.025289  259662 out.go:203] 
	W1019 17:15:52.026698  259662 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:15:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:15:52.026747  259662 out.go:285] * 
	W1019 17:15:52.031505  259662 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:15:52.032798  259662 out.go:203] 

                                                
                                                
** /stderr **
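The three "will retry after ..." waits above (224.373449ms, 363.270481ms, 417.869275ms) come from minikube's retry helper re-running `sudo runc list -f json` with growing delays before surfacing GUEST_PAUSE. A minimal, self-contained Go sketch of that retry-with-backoff pattern (an illustration only, not minikube's actual retry.go) is:

	// retry_runc_list.go: re-run `sudo runc list -f json` a few times with
	// growing, jittered delays, mirroring the waits logged above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runcList runs the same command the pause path uses and wraps any failure
	// together with the command's combined output.
	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %w\noutput:\n%s", err, out)
		}
		return nil
	}

	func main() {
		delay := 200 * time.Millisecond
		var err error
		for attempt := 0; attempt < 4; attempt++ {
			if err = runcList(); err == nil {
				fmt.Println("runc list succeeded")
				return
			}
			// Grow the base delay and add jitter, giving waits in the same
			// ballpark as the 224ms/363ms/417ms retries in the log.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay += 100 * time.Millisecond
		}
		fmt.Printf("giving up: %v\n", err)
	}

In this run every attempt fails identically because /run/runc never appears inside the node container, so the loop exhausts its budget and the pause command exits non-zero.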
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-806996 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-806996
helpers_test.go:243: (dbg) docker inspect no-preload-806996:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	        "Created": "2025-10-19T17:13:34.261937795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:14:53.323922428Z",
	            "FinishedAt": "2025-10-19T17:14:51.990160047Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hosts",
	        "LogPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365-json.log",
	        "Name": "/no-preload-806996",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-806996:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-806996",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	                "LowerDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-806996",
	                "Source": "/var/lib/docker/volumes/no-preload-806996/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-806996",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-806996",
	                "name.minikube.sigs.k8s.io": "no-preload-806996",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "742448deeade6bba535c1fcfd233fb079772c1b79fa293486310ab459cdb23cc",
	            "SandboxKey": "/var/run/docker/netns/742448deeade",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-806996": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:69:aa:d2:a9:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73bac96357aad3b7cfe938f1f5873c93097c59bb8fc57dcc5d67449be0149246",
	                    "EndpointID": "3505c79141851bebc12827e25b33999c4efccd568095e197732fdd71bb9fd76c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-806996",
	                        "2bbe9c0feed5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
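Of the inspect output above, the post-mortem only really needs the State block: Status is still "running" and Paused is false even though pause was requested. A short Go sketch (assumptions: the docker CLI is on PATH, and the container name no-preload-806996 is taken from the output above) that extracts just those fields via docker inspect's Go-template flag:

	// inspect_state.go: ask docker for only the container state fields the
	// post-mortem checks, instead of parsing the full inspect JSON.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} paused={{.State.Paused}}",
			"no-preload-806996").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}

For the container captured above this would print "running paused=false", consistent with the pause command having failed before freezing anything.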
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996: exit status 2 (316.973173ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-806996 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-806996 logs -n 25: (1.225013172s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724       │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:15:31.903216  256207 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:31.903517  256207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:31.903527  256207 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:31.903532  256207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:31.903794  256207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:31.904315  256207 out.go:368] Setting JSON to false
	I1019 17:15:31.905475  256207 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1760890654,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:31.905563  256207 start.go:143] virtualization: kvm guest
	I1019 17:15:31.907811  256207 out.go:179] * [default-k8s-diff-port-663015] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:31.909608  256207 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:31.909635  256207 notify.go:221] Checking for updates...
	I1019 17:15:31.912686  256207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:31.914136  256207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:31.915707  256207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:31.917001  256207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:31.918498  256207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:31.920662  256207 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.920791  256207 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.920926  256207 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.921038  256207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:31.952135  256207 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:31.952246  256207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:32.037820  256207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:32.02422055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:32.037984  256207 docker.go:319] overlay module found
	I1019 17:15:32.043207  256207 out.go:179] * Using the docker driver based on user configuration
	I1019 17:15:32.044705  256207 start.go:309] selected driver: docker
	I1019 17:15:32.044734  256207 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:32.044748  256207 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:32.045394  256207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:32.126872  256207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:32.112012764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:32.127191  256207 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:15:32.127427  256207 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:15:32.129760  256207 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:32.131199  256207 cni.go:84] Creating CNI manager for ""
	I1019 17:15:32.131280  256207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:32.131297  256207 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:32.131405  256207 start.go:353] cluster config:
	{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:32.132908  256207 out.go:179] * Starting "default-k8s-diff-port-663015" primary control-plane node in "default-k8s-diff-port-663015" cluster
	I1019 17:15:32.134080  256207 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:32.135380  256207 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:32.136715  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:32.136756  256207 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:32.136766  256207 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:32.136777  256207 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:32.136869  256207 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:32.136879  256207 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:15:32.137002  256207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:15:32.137034  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json: {Name:mke61c039aa897d9a6dfc418982e7062d2453437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:32.165008  256207 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:32.165038  256207 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:32.165059  256207 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:32.165104  256207 start.go:360] acquireMachinesLock for default-k8s-diff-port-663015: {Name:mkc3b977c4f353256fa3816417a52809b235a030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:32.165230  256207 start.go:364] duration metric: took 101.636µs to acquireMachinesLock for "default-k8s-diff-port-663015"
	I1019 17:15:32.165263  256207 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:32.165347  256207 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:15:28.955363  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:31.454700  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:28.755498  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:28.755944  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:28.756020  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:28.756098  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:28.788829  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:28.788858  219832 cri.go:89] found id: ""
	I1019 17:15:28.788868  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:28.788930  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.793494  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:28.793572  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:28.823750  219832 cri.go:89] found id: ""
	I1019 17:15:28.823778  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.823788  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:28.823795  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:28.823851  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:28.856427  219832 cri.go:89] found id: ""
	I1019 17:15:28.856451  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.856462  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:28.856469  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:28.856525  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:28.888398  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:28.888425  219832 cri.go:89] found id: ""
	I1019 17:15:28.888435  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:28.888494  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.892982  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:28.893058  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:28.922468  219832 cri.go:89] found id: ""
	I1019 17:15:28.922497  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.922507  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:28.922517  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:28.922569  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:28.952452  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:28.952477  219832 cri.go:89] found id: ""
	I1019 17:15:28.952487  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:28.952559  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.957289  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:28.957360  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:28.986202  219832 cri.go:89] found id: ""
	I1019 17:15:28.986229  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.986240  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:28.986247  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:28.986302  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:29.015162  219832 cri.go:89] found id: ""
	I1019 17:15:29.015190  219832 logs.go:282] 0 containers: []
	W1019 17:15:29.015201  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:29.015211  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:29.015225  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:29.091334  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:29.091517  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:29.091555  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:29.135520  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:29.135583  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:29.207086  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:29.207124  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:29.237568  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:29.237602  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:29.287158  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:29.287199  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:29.323049  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:29.323103  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:29.414864  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:29.414904  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:31.932122  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:31.932542  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:31.932604  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:31.932662  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:31.970732  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:31.970755  219832 cri.go:89] found id: ""
	I1019 17:15:31.970763  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:31.970819  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:31.979248  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:31.979322  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:32.020265  219832 cri.go:89] found id: ""
	I1019 17:15:32.020295  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.020306  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:32.020313  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:32.020376  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:32.056993  219832 cri.go:89] found id: ""
	I1019 17:15:32.057021  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.057033  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:32.057040  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:32.057113  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:32.100997  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:32.101031  219832 cri.go:89] found id: ""
	I1019 17:15:32.101042  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:32.101119  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:32.107278  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:32.107391  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:32.145984  219832 cri.go:89] found id: ""
	I1019 17:15:32.146011  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.146024  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:32.146031  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:32.146104  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:32.181273  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:32.181311  219832 cri.go:89] found id: ""
	I1019 17:15:32.181321  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:32.181378  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:32.186566  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:32.186638  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:32.221871  219832 cri.go:89] found id: ""
	I1019 17:15:32.221970  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.221987  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:32.221995  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:32.222111  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:32.259656  219832 cri.go:89] found id: ""
	I1019 17:15:32.259684  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.259694  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:32.259704  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:32.259719  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:32.342424  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:32.342470  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:32.380735  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:32.380771  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:32.450369  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:32.450453  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:32.494816  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:32.494853  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:32.627839  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:32.627892  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:32.645946  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:32.645983  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:32.733672  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:32.733698  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:32.733713  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
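Each api_server.go:253/269 pair above is one iteration of a retry loop: GET /healthz on the apiserver, log connection refusals as "stopped", fall back to gathering diagnostics, and try again a few seconds later. A hedged Go sketch of such a poller (assumed shape, not minikube's api_server.go; this sketch does not carry the cluster CA, so it skips TLS verification for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cert signed by the cluster CA, which this
		// standalone probe does not have, so verification is disabled here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			// Matches the "stopped: ... connect: connection refused" lines above.
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		healthy := resp.StatusCode == http.StatusOK
		resp.Body.Close()
		if healthy {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
}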
	I1019 17:15:30.116754  251026 out.go:252]   - Booting up control plane ...
	I1019 17:15:30.116862  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:15:30.116950  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:15:30.117072  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:15:30.131991  251026 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:15:30.132237  251026 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:15:30.140050  251026 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:15:30.140219  251026 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:15:30.140283  251026 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:15:30.248100  251026 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:15:30.248285  251026 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:15:30.749992  251026 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.099967ms
	I1019 17:15:30.752881  251026 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:15:30.753025  251026 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1019 17:15:30.753141  251026 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:15:30.753208  251026 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:15:33.012669  251026 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.257250612s
	I1019 17:15:33.146473  251026 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.39358791s
	I1019 17:15:34.754665  251026 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001688751s
	I1019 17:15:34.767969  251026 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:15:34.780120  251026 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:15:34.790181  251026 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:15:34.790477  251026 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-090139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:15:34.799881  251026 kubeadm.go:319] [bootstrap-token] Using token: 9zgr3w.nm3btzu7j71lm9u2
	I1019 17:15:34.801746  251026 out.go:252]   - Configuring RBAC rules ...
	I1019 17:15:34.801887  251026 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:15:34.804977  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:15:34.811365  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:15:34.814127  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:15:34.818296  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:15:34.821494  251026 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:15:35.161399  251026 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:15:35.988488  251026 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:15:36.858272  251026 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:15:36.859538  251026 kubeadm.go:319] 
	I1019 17:15:36.859687  251026 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:15:36.859714  251026 kubeadm.go:319] 
	I1019 17:15:36.859813  251026 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:15:36.859825  251026 kubeadm.go:319] 
	I1019 17:15:36.859857  251026 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:15:36.859951  251026 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:15:36.860021  251026 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:15:36.860030  251026 kubeadm.go:319] 
	I1019 17:15:36.860118  251026 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:15:36.860132  251026 kubeadm.go:319] 
	I1019 17:15:36.860194  251026 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:15:36.860199  251026 kubeadm.go:319] 
	I1019 17:15:36.860265  251026 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:15:36.860361  251026 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:15:36.860465  251026 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:15:36.860481  251026 kubeadm.go:319] 
	I1019 17:15:36.860600  251026 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:15:36.860726  251026 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:15:36.860747  251026 kubeadm.go:319] 
	I1019 17:15:36.860860  251026 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9zgr3w.nm3btzu7j71lm9u2 \
	I1019 17:15:36.861036  251026 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:15:36.861085  251026 kubeadm.go:319] 	--control-plane 
	I1019 17:15:36.861096  251026 kubeadm.go:319] 
	I1019 17:15:36.861217  251026 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:15:36.861235  251026 kubeadm.go:319] 
	I1019 17:15:36.861338  251026 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9zgr3w.nm3btzu7j71lm9u2 \
	I1019 17:15:36.861469  251026 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 17:15:36.864686  251026 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:15:36.864846  251026 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:15:36.864878  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:36.864890  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
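cni.go:143 above is a table lookup: with the docker driver, a crio runtime cannot ride on Docker's own networking, so minikube deploys kindnet as the CNI. A toy rendering of just that one row (the real table in minikube's cni package covers many more driver/runtime combinations, and the fallback below is a placeholder, not minikube's actual default):

package main

import "fmt"

// chooseCNI mirrors the single case visible in the log line above:
// "docker" driver + "crio" runtime => kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge" // placeholder only
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}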
	I1019 17:15:32.170272  256207 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:32.170554  256207 start.go:159] libmachine.API.Create for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:15:32.170616  256207 client.go:171] LocalClient.Create starting
	I1019 17:15:32.170743  256207 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:32.170792  256207 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:32.170817  256207 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:32.170881  256207 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:32.170901  256207 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:32.170912  256207 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:32.171340  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:32.194917  256207 cli_runner.go:211] docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:32.194994  256207 network_create.go:284] running [docker network inspect default-k8s-diff-port-663015] to gather additional debugging logs...
	I1019 17:15:32.195018  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015
	W1019 17:15:32.220045  256207 cli_runner.go:211] docker network inspect default-k8s-diff-port-663015 returned with exit code 1
	I1019 17:15:32.220124  256207 network_create.go:287] error running [docker network inspect default-k8s-diff-port-663015]: docker network inspect default-k8s-diff-port-663015: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-663015 not found
	I1019 17:15:32.220142  256207 network_create.go:289] output of [docker network inspect default-k8s-diff-port-663015]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-663015 not found
	
	** /stderr **
	I1019 17:15:32.220265  256207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:32.245048  256207 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:32.246145  256207 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:32.247275  256207 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:32.248100  256207 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73bac96357aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:13:5a:d3:70} reservation:<nil>}
	I1019 17:15:32.249260  256207 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f94e30}
	I1019 17:15:32.249291  256207 network_create.go:124] attempt to create docker network default-k8s-diff-port-663015 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:15:32.249353  256207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 default-k8s-diff-port-663015
	I1019 17:15:32.327302  256207 network_create.go:108] docker network default-k8s-diff-port-663015 192.168.85.0/24 created
	I1019 17:15:32.327342  256207 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-663015" container
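The four "skipping subnet ... that is taken" lines show how this profile's address block was chosen: candidate /24s advance through 192.168.49/58/67/76/85 (the third octet grows by 9 per attempt in this log), the first subnet with no existing bridge wins, and the gateway and static node IP are then fixed at .1 and .2. A sketch under those observed assumptions, with the in-use set hard-coded from the log rather than queried from Docker and the host interfaces as minikube does:

package main

import "fmt"

func main() {
	// Subnets the log shows as already occupied by existing minikube bridges.
	inUse := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Third octet advances by 9 per attempt, as observed above.
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if inUse[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Printf("using free private subnet %s: gateway 192.168.%d.1, node IP 192.168.%d.2\n",
			subnet, octet, octet)
		break
	}
}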
	I1019 17:15:32.327418  256207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:32.350545  256207 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-663015 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:32.374502  256207 oci.go:103] Successfully created a docker volume default-k8s-diff-port-663015
	I1019 17:15:32.374587  256207 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-663015-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --entrypoint /usr/bin/test -v default-k8s-diff-port-663015:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:15:32.864578  256207 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-663015
	I1019 17:15:32.864634  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:32.864661  256207 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:15:32.864737  256207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-663015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:15:37.003206  251026 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 17:15:33.454957  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:35.953453  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:36.453745  245899 pod_ready.go:94] pod "coredns-66bc5c9577-s4dxw" is "Ready"
	I1019 17:15:36.453785  245899 pod_ready.go:86] duration metric: took 32.505068187s for pod "coredns-66bc5c9577-s4dxw" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.456235  245899 pod_ready.go:83] waiting for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.459937  245899 pod_ready.go:94] pod "etcd-no-preload-806996" is "Ready"
	I1019 17:15:36.459958  245899 pod_ready.go:86] duration metric: took 3.68272ms for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.461849  245899 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.465454  245899 pod_ready.go:94] pod "kube-apiserver-no-preload-806996" is "Ready"
	I1019 17:15:36.465477  245899 pod_ready.go:86] duration metric: took 3.608848ms for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.467204  245899 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.651960  245899 pod_ready.go:94] pod "kube-controller-manager-no-preload-806996" is "Ready"
	I1019 17:15:36.651986  245899 pod_ready.go:86] duration metric: took 184.763994ms for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.852203  245899 pod_ready.go:83] waiting for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.253368  245899 pod_ready.go:94] pod "kube-proxy-76f5v" is "Ready"
	I1019 17:15:37.253421  245899 pod_ready.go:86] duration metric: took 401.192762ms for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.452538  245899 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.852685  245899 pod_ready.go:94] pod "kube-scheduler-no-preload-806996" is "Ready"
	I1019 17:15:37.852710  245899 pod_ready.go:86] duration metric: took 400.146919ms for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.852722  245899 pod_ready.go:40] duration metric: took 33.908905676s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:15:37.908548  245899 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:15:37.910333  245899 out.go:179] * Done! kubectl is now configured to use "no-preload-806996" cluster and "default" namespace by default
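The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition is True or the pod is gone, recording a duration metric per pod (32.5s for coredns, down to a few milliseconds for etcd and the apiserver, which were already up). A compressed client-go sketch of one such wait, assuming a kubeconfig path and pod name (minikube's actual helper also handles the pod-gone case and an overall timeout):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-no-preload-806996", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod is \"Ready\" after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}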
	I1019 17:15:35.275419  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:35.275887  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:35.275958  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:35.276016  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:35.307678  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:35.307700  219832 cri.go:89] found id: ""
	I1019 17:15:35.307708  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:35.307753  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.312853  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:35.312928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:35.346037  219832 cri.go:89] found id: ""
	I1019 17:15:35.346092  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.346104  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:35.346111  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:35.346165  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:35.378630  219832 cri.go:89] found id: ""
	I1019 17:15:35.378662  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.378673  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:35.378680  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:35.378735  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:35.413360  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:35.413387  219832 cri.go:89] found id: ""
	I1019 17:15:35.413399  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:35.413457  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.418870  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:35.419162  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:35.456695  219832 cri.go:89] found id: ""
	I1019 17:15:35.456724  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.456734  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:35.456742  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:35.456796  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:35.488034  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:35.488058  219832 cri.go:89] found id: ""
	I1019 17:15:35.488080  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:35.488134  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.492575  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:35.492635  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:35.519809  219832 cri.go:89] found id: ""
	I1019 17:15:35.519831  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.519839  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:35.519844  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:35.519890  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:35.549606  219832 cri.go:89] found id: ""
	I1019 17:15:35.549630  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.549638  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:35.549646  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:35.549657  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:35.643806  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:35.643849  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:35.659863  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:35.659899  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:35.717571  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:35.717599  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:35.717615  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:35.750362  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:35.750394  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:35.812487  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:35.812520  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:35.838694  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:35.838719  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:35.888325  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:35.888359  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:38.421141  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:37.164230  251026 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:15:37.170103  251026 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:15:37.170128  251026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:15:37.184684  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:15:37.657163  251026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:15:37.657340  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:37.657439  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-090139 minikube.k8s.io/updated_at=2025_10_19T17_15_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=embed-certs-090139 minikube.k8s.io/primary=true
	I1019 17:15:37.669594  251026 ops.go:34] apiserver oom_adj: -16
	I1019 17:15:37.745746  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:38.246757  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:38.746298  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:39.246272  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:37.572436  256207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-663015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.707618201s)
	I1019 17:15:37.572480  256207 kic.go:203] duration metric: took 4.707815653s to extract preloaded images to volume ...
	W1019 17:15:37.572587  256207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:15:37.572638  256207 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:15:37.572749  256207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:15:37.640259  256207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-663015 --name default-k8s-diff-port-663015 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --network default-k8s-diff-port-663015 --ip 192.168.85.2 --volume default-k8s-diff-port-663015:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:15:37.976856  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Running}}
	I1019 17:15:38.001144  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.024268  256207 cli_runner.go:164] Run: docker exec default-k8s-diff-port-663015 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:15:38.075095  256207 oci.go:144] the created container "default-k8s-diff-port-663015" has a running status.
	I1019 17:15:38.075129  256207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa...
	I1019 17:15:38.375731  256207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:15:38.406364  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.427531  256207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:15:38.427554  256207 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-663015 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:15:38.478828  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.498417  256207 machine.go:94] provisionDockerMachine start ...
	I1019 17:15:38.498538  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.517560  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.517801  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.517813  256207 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:15:38.654113  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:15:38.654142  256207 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-663015"
	I1019 17:15:38.654206  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.675521  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.675839  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.675862  256207 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-663015 && echo "default-k8s-diff-port-663015" | sudo tee /etc/hostname
	I1019 17:15:38.825226  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:15:38.825289  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.843802  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.844010  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.844030  256207 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-663015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-663015/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-663015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:15:38.979170  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:15:38.979205  256207 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:15:38.979244  256207 ubuntu.go:190] setting up certificates
	I1019 17:15:38.979256  256207 provision.go:84] configureAuth start
	I1019 17:15:38.979315  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:38.997141  256207 provision.go:143] copyHostCerts
	I1019 17:15:38.997221  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:15:38.997236  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:15:38.997309  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:15:38.997392  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:15:38.997401  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:15:38.997428  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:15:38.997483  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:15:38.997490  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:15:38.997520  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:15:38.997569  256207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-663015 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-663015 localhost minikube]
	I1019 17:15:39.115013  256207 provision.go:177] copyRemoteCerts
	I1019 17:15:39.115087  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:15:39.115123  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.134554  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.232716  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:15:39.253526  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:15:39.273304  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:15:39.292760  256207 provision.go:87] duration metric: took 313.488795ms to configureAuth
	I1019 17:15:39.292789  256207 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:15:39.292957  256207 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:39.293056  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.314595  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:39.314929  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:39.314960  256207 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:15:39.565321  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:15:39.565346  256207 machine.go:97] duration metric: took 1.06690265s to provisionDockerMachine
	I1019 17:15:39.565359  256207 client.go:174] duration metric: took 7.394730229s to LocalClient.Create
	I1019 17:15:39.565373  256207 start.go:167] duration metric: took 7.394822286s to libmachine.API.Create "default-k8s-diff-port-663015"
	I1019 17:15:39.565382  256207 start.go:293] postStartSetup for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:15:39.565395  256207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:15:39.565457  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:15:39.565504  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.587442  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.687547  256207 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:15:39.691397  256207 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:15:39.691424  256207 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:15:39.691435  256207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:15:39.691486  256207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:15:39.691569  256207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:15:39.691660  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:15:39.699588  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:39.721659  256207 start.go:296] duration metric: took 156.259902ms for postStartSetup
	I1019 17:15:39.722148  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:39.740993  256207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:15:39.741315  256207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:15:39.741365  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.760154  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.858550  256207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:15:39.863203  256207 start.go:128] duration metric: took 7.697842458s to createHost
	I1019 17:15:39.863230  256207 start.go:83] releasing machines lock for "default-k8s-diff-port-663015", held for 7.697984929s
	I1019 17:15:39.863301  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:39.882426  256207 ssh_runner.go:195] Run: cat /version.json
	I1019 17:15:39.882473  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.882519  256207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:15:39.882586  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.901461  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.902472  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:40.066790  256207 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:40.074104  256207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:15:40.113529  256207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:15:40.118448  256207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:15:40.118515  256207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:15:40.147295  256207 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 17:15:40.147323  256207 start.go:496] detecting cgroup driver to use...
	I1019 17:15:40.147353  256207 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:15:40.147390  256207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:15:40.164307  256207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:15:40.178315  256207 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:15:40.178386  256207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:15:40.198676  256207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:15:40.216674  256207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:15:40.304996  256207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:15:40.396168  256207 docker.go:234] disabling docker service ...
	I1019 17:15:40.396238  256207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:15:40.415926  256207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:15:40.429962  256207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:15:40.517635  256207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:15:40.605312  256207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:15:40.620142  256207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:15:40.635325  256207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:15:40.635377  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.646022  256207 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:15:40.646091  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.655485  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.664440  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.673866  256207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:15:40.682533  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.691463  256207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.706131  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.715518  256207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:15:40.723727  256207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:15:40.731898  256207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:40.817545  256207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:15:40.931737  256207 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:15:40.931830  256207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:15:40.936504  256207 start.go:564] Will wait 60s for crictl version
	I1019 17:15:40.936568  256207 ssh_runner.go:195] Run: which crictl
	I1019 17:15:40.941086  256207 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:15:40.965153  256207 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:15:40.965238  256207 ssh_runner.go:195] Run: crio --version
	I1019 17:15:40.995614  256207 ssh_runner.go:195] Run: crio --version
	I1019 17:15:41.025460  256207 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
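Taken together, the sed edits above (17:15:40.63-40.71) should leave /etc/crio/crio.conf.d/02-crio.conf containing, among its existing keys, roughly the following (TOML section headers omitted; assuming each pattern matched once):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

The daemon-reload/restart pair that follows is what makes CRI-O pick these up, and the subsequent 60s waits on /var/run/crio/crio.sock and `crictl version` confirm the runtime came back with the expected version.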
	I1019 17:15:39.746530  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:40.246220  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:40.746245  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:41.245838  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:41.324153  251026 kubeadm.go:1114] duration metric: took 3.666852468s to wait for elevateKubeSystemPrivileges
	I1019 17:15:41.324196  251026 kubeadm.go:403] duration metric: took 16.35096448s to StartCluster
	I1019 17:15:41.324218  251026 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.324284  251026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:41.325758  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.326006  251026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:41.326031  251026 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:15:41.326149  251026 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-090139"
	I1019 17:15:41.326020  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:15:41.326185  251026 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-090139"
	I1019 17:15:41.326230  251026 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:41.326230  251026 addons.go:70] Setting default-storageclass=true in profile "embed-certs-090139"
	I1019 17:15:41.326235  251026 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:15:41.326277  251026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-090139"
	I1019 17:15:41.326790  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.326853  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.328042  251026 out.go:179] * Verifying Kubernetes components...
	I1019 17:15:41.332640  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:41.354584  251026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:15:41.357694  251026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:41.357719  251026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:15:41.357780  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:41.358779  251026 addons.go:239] Setting addon default-storageclass=true in "embed-certs-090139"
	I1019 17:15:41.358878  251026 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:15:41.359802  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.387971  251026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:41.388096  251026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:15:41.388221  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:41.394302  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:41.414598  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:41.440465  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:15:41.496914  251026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:41.524297  251026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:41.548047  251026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:41.653010  251026 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1019 17:15:41.654727  251026 node_ready.go:35] waiting up to 6m0s for node "embed-certs-090139" to be "Ready" ...
	I1019 17:15:41.846223  251026 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
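Note: the sed pipeline run at 17:15:41.440 above rewrites CoreDNS's Corefile so in-cluster pods can resolve host.minikube.internal. A minimal way to confirm the injected block landed, using only values from this run:

    # inspect the patched Corefile in the coredns ConfigMap
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected fragment after injection:
    #     hosts {
    #        192.168.103.1 host.minikube.internal
    #        fallthrough
    #     }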
	I1019 17:15:41.026574  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:41.045566  256207 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
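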
	I1019 17:15:41.050440  256207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:15:41.061004  256207 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:15:41.061140  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:41.061184  256207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:41.095482  256207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:41.095509  256207 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:15:41.095561  256207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:41.123554  256207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:41.123580  256207 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:15:41.123589  256207 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 17:15:41.123667  256207 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-663015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
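Note: the kubelet unit override above is installed as a systemd drop-in (the 378-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). The merged unit can be inspected on the node with standard systemd tooling:

    systemctl cat kubelet            # service file plus drop-ins, including the ExecStart override
    sudo systemctl daemon-reload && sudo systemctl restart kubelet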
	I1019 17:15:41.123755  256207 ssh_runner.go:195] Run: crio config
	I1019 17:15:41.180314  256207 cni.go:84] Creating CNI manager for ""
	I1019 17:15:41.180335  256207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:41.180352  256207 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:15:41.180374  256207 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-663015 NodeName:default-k8s-diff-port-663015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:15:41.180496  256207 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-663015"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
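Note: the config above is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to kubeadm init (17:15:44.005 below). A hedged pre-flight sanity check for such a file, assuming a kubeadm recent enough (v1.26+) to ship the validate subcommand:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml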
	
	I1019 17:15:41.180552  256207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:15:41.189161  256207 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:15:41.189228  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:15:41.197116  256207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 17:15:41.210084  256207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:15:41.226436  256207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1019 17:15:41.241200  256207 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:15:41.245461  256207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
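Note: the one-liner above is an idempotent /etc/hosts rewrite: drop any stale entry for the name, append the fresh mapping, then replace the file via a temp copy. The same pattern stands alone as:

    NAME=control-plane.minikube.internal; IP=192.168.85.2
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts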
	I1019 17:15:41.256758  256207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:41.359061  256207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:41.402304  256207 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015 for IP: 192.168.85.2
	I1019 17:15:41.402328  256207 certs.go:195] generating shared ca certs ...
	I1019 17:15:41.402347  256207 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.402497  256207 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:15:41.402571  256207 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:15:41.402587  256207 certs.go:257] generating profile certs ...
	I1019 17:15:41.402658  256207 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key
	I1019 17:15:41.402821  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt with IP's: []
	I1019 17:15:41.634999  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt ...
	I1019 17:15:41.635025  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt: {Name:mka0500442723f4230e6b879df857ac40daca047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.635231  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key ...
	I1019 17:15:41.635245  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key: {Name:mk43309fea32c11e9d1f599c181892c4b5610699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.635361  256207 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db
	I1019 17:15:41.635375  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:15:43.422477  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 17:15:43.422536  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:43.422595  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:43.453022  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:15:43.453047  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:43.453052  219832 cri.go:89] found id: ""
	I1019 17:15:43.453061  219832 logs.go:282] 2 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:43.453197  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.458100  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.462909  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:43.462978  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:43.505928  219832 cri.go:89] found id: ""
	I1019 17:15:43.505962  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.505972  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:43.505979  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:43.506052  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:42.209858  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db ...
	I1019 17:15:42.209884  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db: {Name:mkfa7a703df391bd931b2cedfca2d3a4614585df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:42.210046  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db ...
	I1019 17:15:42.210058  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db: {Name:mk5a4912e2b7a54fbc36d39103af69e291ffd333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:42.210174  256207 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt
	I1019 17:15:42.210256  256207 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key
	I1019 17:15:42.210313  256207 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key
	I1019 17:15:42.210329  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt with IP's: []
	I1019 17:15:43.437025  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt ...
	I1019 17:15:43.437056  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt: {Name:mk853093f2a301d2ed2f91679f038f64b5d184c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:43.437241  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key ...
	I1019 17:15:43.437258  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key: {Name:mk6993b74fbbc0917420597a5a89aa15195ac013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
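Note: crypto.go issues these profile certs in Go against the shared minikubeCA. An illustrative openssl equivalent for the apiserver cert with the same IP SANs (file names are hypothetical; assumes OpenSSL 3.x for -copy_extensions):

    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2" \
      -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -copy_extensions copy -days 365 -out apiserver.crt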
	I1019 17:15:43.437486  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:15:43.437541  256207 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:15:43.437557  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:15:43.437596  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:15:43.437630  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:15:43.437665  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:15:43.437723  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:43.438520  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:15:43.461622  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:15:43.484190  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:15:43.516604  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:15:43.536832  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 17:15:43.555288  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:15:43.575301  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:15:43.596752  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:15:43.616537  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:15:43.638813  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:15:43.658945  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:15:43.678498  256207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:15:43.694833  256207 ssh_runner.go:195] Run: openssl version
	I1019 17:15:43.702262  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:15:43.712580  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.717141  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.717213  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.756376  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:15:43.766487  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:15:43.775841  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.779900  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.779983  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.815825  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:15:43.824736  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:15:43.833598  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.837656  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.837743  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.873845  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
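Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each trusted cert must also be reachable as <subject_hash>.0 under /etc/ssl/certs (b5213941.0 above is minikubeCA's subject hash). Generically:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"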
	I1019 17:15:43.883853  256207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:15:43.887820  256207 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:15:43.887885  256207 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:43.887968  256207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:15:43.888016  256207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:15:43.916830  256207 cri.go:89] found id: ""
	I1019 17:15:43.916906  256207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:15:43.925402  256207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:15:43.933710  256207 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:15:43.933764  256207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:15:43.942092  256207 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:15:43.942112  256207 kubeadm.go:158] found existing configuration files:
	
	I1019 17:15:43.942164  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1019 17:15:43.950196  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:15:43.950247  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:15:43.957888  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1019 17:15:43.965641  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:15:43.965707  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:15:43.973279  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1019 17:15:43.981825  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:15:43.981890  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:15:43.989801  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1019 17:15:43.997723  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:15:43.997781  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:15:44.005560  256207 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:15:44.043217  256207 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:15:44.043297  256207 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:15:44.064754  256207 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:15:44.064864  256207 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:15:44.064922  256207 kubeadm.go:319] OS: Linux
	I1019 17:15:44.065005  256207 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:15:44.065104  256207 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:15:44.065188  256207 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:15:44.065346  256207 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:15:44.065432  256207 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:15:44.065502  256207 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:15:44.065590  256207 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:15:44.065679  256207 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:15:44.125877  256207 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:15:44.126034  256207 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:15:44.126157  256207 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:15:44.134504  256207 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:15:41.847425  251026 addons.go:515] duration metric: took 521.387494ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:15:42.157918  251026 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-090139" context rescaled to 1 replicas
	W1019 17:15:43.658171  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
	I1019 17:15:44.137441  256207 out.go:252]   - Generating certificates and keys ...
	I1019 17:15:44.137515  256207 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:15:44.137580  256207 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:15:44.744872  256207 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:15:44.942133  256207 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:15:45.141425  256207 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:15:45.219605  256207 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:15:45.420047  256207 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:15:45.420219  256207 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-663015 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:15:45.657023  256207 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:15:45.657207  256207 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-663015 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:15:45.737294  256207 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:15:45.908211  256207 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:15:46.348591  256207 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:15:46.348696  256207 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:15:46.437698  256207 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:15:46.536617  256207 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:15:46.563465  256207 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:15:47.055139  256207 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:15:47.373764  256207 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:15:47.374284  256207 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:15:47.378449  256207 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:15:43.534868  219832 cri.go:89] found id: ""
	I1019 17:15:43.534899  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.534912  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:43.534920  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:43.534974  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:43.564642  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:43.564669  219832 cri.go:89] found id: ""
	I1019 17:15:43.564680  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:43.564734  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.568925  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:43.569005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:43.597841  219832 cri.go:89] found id: ""
	I1019 17:15:43.597866  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.597875  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:43.597881  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:43.597934  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:43.627780  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:43.627811  219832 cri.go:89] found id: ""
	I1019 17:15:43.627822  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:43.627878  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.631730  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:43.631785  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:43.662084  219832 cri.go:89] found id: ""
	I1019 17:15:43.662111  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.662122  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:43.662129  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:43.662187  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:43.692768  219832 cri.go:89] found id: ""
	I1019 17:15:43.692802  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.692814  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:43.692832  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:43.692845  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:43.709666  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:43.709700  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
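Note: the probes above enumerate control-plane containers one component at a time with crictl; run by hand on the node they look like:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system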
	W1019 17:15:46.158385  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
	W1019 17:15:48.658027  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
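Note: node_ready.go polls the node object for up to 6m; the equivalent one-shot check with kubectl would be:

    kubectl wait --for=condition=Ready node/embed-certs-090139 --timeout=6m0s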
	I1019 17:15:47.379914  256207 out.go:252]   - Booting up control plane ...
	I1019 17:15:47.380054  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:15:47.380149  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:15:47.381527  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:15:47.410678  256207 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:15:47.410879  256207 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:15:47.418293  256207 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:15:47.418443  256207 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:15:47.418525  256207 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:15:47.523529  256207 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:15:47.523666  256207 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:15:48.025432  256207 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.015996ms
	I1019 17:15:48.028488  256207 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:15:48.028611  256207 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1019 17:15:48.028746  256207 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:15:48.028854  256207 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:15:49.372406  256207 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.343795411s
	I1019 17:15:50.551580  256207 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.523033306s
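Note: kubeadm's control-plane-check hits the components' own health endpoints, which can also be queried manually (-k because scheduler and controller-manager serve self-signed certs, and /healthz and /livez are in the always-allow paths by default):

    curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez      # kube-scheduler
    curl -k https://192.168.85.2:8444/livez    # kube-apiserver on this profile's port 8444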
	
	
	==> CRI-O <==
	Oct 19 17:15:13 no-preload-806996 crio[559]: time="2025-10-19T17:15:13.693284435Z" level=info msg="Started container" PID=1733 containerID=61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper id=9cf70d60-a4dc-4487-a41e-40bdfdb602a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45b9c7496b54b527258d3f06df78dc1260eaa53cb47dff8ab74c4b8b47970d0a
	Oct 19 17:15:14 no-preload-806996 crio[559]: time="2025-10-19T17:15:14.648842917Z" level=info msg="Removing container: 794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b" id=a177b77f-6cbc-4086-802b-8ce8e964a03d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:14 no-preload-806996 crio[559]: time="2025-10-19T17:15:14.665136363Z" level=info msg="Removed container 794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=a177b77f-6cbc-4086-802b-8ce8e964a03d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.576967804Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2891ebb9-ec2f-4aa1-9d36-768dbf3a743f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.577925618Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fcbfe6f5-bbde-4128-924e-3d5e3851e742 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.578935404Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=42c0ddf6-276d-41df-bb92-886e75b99117 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.579231904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.585161646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.585629929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.630407052Z" level=info msg="Created container e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=42c0ddf6-276d-41df-bb92-886e75b99117 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.631080531Z" level=info msg="Starting container: e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c" id=7645c12a-4a95-4b0a-9bb6-2b725a8ea603 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.633169583Z" level=info msg="Started container" PID=1747 containerID=e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper id=7645c12a-4a95-4b0a-9bb6-2b725a8ea603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45b9c7496b54b527258d3f06df78dc1260eaa53cb47dff8ab74c4b8b47970d0a
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.707781083Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6aa3e0e3-e737-4de9-aba5-662227ab9fcf name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.708749265Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=744d2e3d-d8c7-4228-8ad5-53bba62757cd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.709824488Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d62ab1b4-0490-4263-a067-a286fec42254 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.71015526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.712255134Z" level=info msg="Removing container: 61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82" id=97c6c55d-a3d9-4891-9eb8-8ace769272e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.714925804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715138711Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/55b50df613592c7ef25de03b8250e5eee1e826c0c304a3ebc2b95c1cd1a82dca/merged/etc/passwd: no such file or directory"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715169688Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/55b50df613592c7ef25de03b8250e5eee1e826c0c304a3ebc2b95c1cd1a82dca/merged/etc/group: no such file or directory"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715469408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.735849171Z" level=info msg="Removed container 61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=97c6c55d-a3d9-4891-9eb8-8ace769272e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.746813627Z" level=info msg="Created container 382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6: kube-system/storage-provisioner/storage-provisioner" id=d62ab1b4-0490-4263-a067-a286fec42254 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.747443384Z" level=info msg="Starting container: 382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6" id=05b11498-30ce-4896-a5a8-ce5dafe1cd3f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.749713979Z" level=info msg="Started container" PID=1757 containerID=382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6 description=kube-system/storage-provisioner/storage-provisioner id=05b11498-30ce-4896-a5a8-ce5dafe1cd3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=57a0cbf50ea377e3b6c16260ec883cfd86d8f81c96a93f54ea65a283ec4c9a3b
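Note: these CRI-O entries come from the node's journal; on a running profile the same stream is reachable with (profile name taken from this run):

    minikube -p no-preload-806996 ssh -- sudo journalctl -u crio --no-pager -n 50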
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	382f76d5c0a2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   57a0cbf50ea37       storage-provisioner                          kube-system
	e06905acc2972       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   45b9c7496b54b       dashboard-metrics-scraper-6ffb444bf9-s96d5   kubernetes-dashboard
	0a2f39ea915fe       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   b61f8b3a8c704       kubernetes-dashboard-855c9754f9-8t886        kubernetes-dashboard
	df36eb3175777       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   da2521ad0be89       busybox                                      default
	c4ea4d266cd9b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   bfb2b4e132e9f       coredns-66bc5c9577-s4dxw                     kube-system
	8d4d5fee23b45       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   9739150f72253       kindnet-zndcx                                kube-system
	47a6f8337391e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   57a0cbf50ea37       storage-provisioner                          kube-system
	b6a81c8dbabbd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   a24d941074ec6       kube-proxy-76f5v                             kube-system
	57798f07866c6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   4887fa43c34be       etcd-no-preload-806996                       kube-system
	fce011c2a0450       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   fc97d656280e7       kube-apiserver-no-preload-806996             kube-system
	bc11f1b63d4f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   951a0793b721b       kube-scheduler-no-preload-806996             kube-system
	59114e638c3e3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   4e0d9b4f2640f       kube-controller-manager-no-preload-806996    kube-system
	
	
	==> coredns [c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38216 - 37030 "HINFO IN 7320756310291767829.2100046436382029024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080923161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
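Note: the dial tcp 10.96.0.1:443 i/o timeouts mean CoreDNS could not reach the kubernetes Service VIP while the dataplane (kube-proxy/kindnet) was still coming up; per the container status section above, both are Running now. A hedged way to re-check the VIP path on the node:

    kubectl get svc kubernetes -o wide           # the 10.96.0.1 ClusterIP
    sudo crictl ps --name=kube-proxy --quiet     # non-empty once the proxy is up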
	
	
	==> describe nodes <==
	Name:               no-preload-806996
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-806996
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-806996
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_14_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:14:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-806996
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:15:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:15:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-806996
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                18a9e783-21eb-4794-bbc4-d787e21fb79d
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-s4dxw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-806996                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-zndcx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-806996              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-806996     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-76f5v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-806996              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s96d5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8t886         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node no-preload-806996 event: Registered Node no-preload-806996 in Controller
	  Normal  NodeReady                90s                  kubelet          Node no-preload-806996 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node no-preload-806996 event: Registered Node no-preload-806996 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca] <==
	{"level":"warn","ts":"2025-10-19T17:15:01.746530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.753061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.759509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.766222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.778270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.784972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.792274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.798719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.805122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.811095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.817519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.824043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.831447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.837495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.844426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.851408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.862171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.869313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.875709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.927354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:36.395538Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.779097ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T17:15:36.395646Z","caller":"traceutil/trace.go:172","msg":"trace[1325953621] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:662; }","duration":"287.90836ms","start":"2025-10-19T17:15:36.107720Z","end":"2025-10-19T17:15:36.395629Z","steps":["trace[1325953621] 'range keys from in-memory index tree'  (duration: 287.728697ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:15:36.396282Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.167898ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356070003552425 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:589 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T17:15:36.396447Z","caller":"traceutil/trace.go:172","msg":"trace[1111645797] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"396.631894ms","start":"2025-10-19T17:15:35.999793Z","end":"2025-10-19T17:15:36.396425Z","steps":["trace[1111645797] 'process raft request'  (duration: 129.743372ms)","trace[1111645797] 'compare'  (duration: 265.992638ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:15:36.396757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-19T17:15:35.999771Z","time spent":"396.815389ms","remote":"127.0.0.1:43262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:589 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 17:15:53 up 58 min,  0 user,  load average: 4.00, 3.00, 1.80
	Linux no-preload-806996 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf] <==
	I1019 17:15:03.211298       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:03.211636       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:15:03.211868       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:03.211891       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:03.211914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:03.508830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:03.508949       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:03.508986       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:03.509411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:15:03.809351       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:15:03.809391       1 metrics.go:72] Registering metrics
	I1019 17:15:03.809510       1 controller.go:711] "Syncing nftables rules"
	I1019 17:15:13.432552       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:13.432611       1 main.go:301] handling current node
	I1019 17:15:23.432610       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:23.432645       1 main.go:301] handling current node
	I1019 17:15:33.431908       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:33.431946       1 main.go:301] handling current node
	I1019 17:15:43.433204       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:43.433239       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b] <==
	I1019 17:15:02.404544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:15:02.403672       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:15:02.403333       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:15:02.405350       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:15:02.405397       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:15:02.405416       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:15:02.405422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:15:02.405428       1 cache.go:39] Caches are synced for autoregister controller
	E1019 17:15:02.410489       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:15:02.412269       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:15:02.417886       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:15:02.427812       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:15:02.427842       1 policy_source.go:240] refreshing policies
	I1019 17:15:02.459523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:15:02.556595       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:15:02.691954       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:15:02.724574       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:15:02.743142       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:15:02.750259       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:15:02.793748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.60.237"}
	I1019 17:15:02.803954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.119.238"}
	I1019 17:15:03.305194       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:15:05.793250       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:15:06.140307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:15:06.290363       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7] <==
	I1019 17:15:05.703786       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:15:05.706122       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:15:05.708454       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:15:05.718783       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:15:05.736258       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:15:05.736391       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:15:05.736416       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:15:05.736730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:05.736835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:15:05.736873       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:15:05.737035       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:15:05.737718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:15:05.737731       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:15:05.737724       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:05.737812       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:15:05.737831       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:15:05.737919       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:15:05.738095       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-806996"
	I1019 17:15:05.738159       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:15:05.740230       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:05.740323       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:15:05.741465       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:05.743207       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:05.745603       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:05.769107       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c] <==
	I1019 17:15:02.986042       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:03.051892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:03.152047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:03.152114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:15:03.152180       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:03.171012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:03.171085       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:03.176376       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:03.176848       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:03.176893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:03.178657       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:03.178774       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:03.178921       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:03.179011       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:03.179479       1 config.go:309] "Starting node config controller"
	I1019 17:15:03.179564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:03.179590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:03.179013       1 config.go:200] "Starting service config controller"
	I1019 17:15:03.180594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:03.279093       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:15:03.280297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:15:03.281490       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53] <==
	I1019 17:15:00.600326       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:15:02.338770       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:15:02.338827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:15:02.338840       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:15:02.338854       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:15:02.382700       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:15:02.382738       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:02.385908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:02.386031       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:02.387127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:15:02.387293       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:15:02.486493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:15:05 no-preload-806996 kubelet[710]: I1019 17:15:05.943279     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476259     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8t886\" (UID: \"21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476305     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d910290-e686-4092-b130-ac3aae81b534-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s96d5\" (UID: \"6d910290-e686-4092-b130-ac3aae81b534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476321     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkmf\" (UniqueName: \"kubernetes.io/projected/21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7-kube-api-access-btkmf\") pod \"kubernetes-dashboard-855c9754f9-8t886\" (UID: \"21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476347     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ksz\" (UniqueName: \"kubernetes.io/projected/6d910290-e686-4092-b130-ac3aae81b534-kube-api-access-p6ksz\") pod \"dashboard-metrics-scraper-6ffb444bf9-s96d5\" (UID: \"6d910290-e686-4092-b130-ac3aae81b534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5"
	Oct 19 17:15:12 no-preload-806996 kubelet[710]: I1019 17:15:12.579690     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886" podStartSLOduration=3.15677728 podStartE2EDuration="6.579664106s" podCreationTimestamp="2025-10-19 17:15:06 +0000 UTC" firstStartedPulling="2025-10-19 17:15:06.693290995 +0000 UTC m=+7.210931605" lastFinishedPulling="2025-10-19 17:15:10.116177818 +0000 UTC m=+10.633818431" observedRunningTime="2025-10-19 17:15:10.652357832 +0000 UTC m=+11.169998452" watchObservedRunningTime="2025-10-19 17:15:12.579664106 +0000 UTC m=+13.097304719"
	Oct 19 17:15:13 no-preload-806996 kubelet[710]: I1019 17:15:13.641911     710 scope.go:117] "RemoveContainer" containerID="794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: I1019 17:15:14.646862     710 scope.go:117] "RemoveContainer" containerID="794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: I1019 17:15:14.647087     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: E1019 17:15:14.647267     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:15 no-preload-806996 kubelet[710]: I1019 17:15:15.651858     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:15 no-preload-806996 kubelet[710]: E1019 17:15:15.652144     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:20 no-preload-806996 kubelet[710]: I1019 17:15:20.120529     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:20 no-preload-806996 kubelet[710]: E1019 17:15:20.120814     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.576521     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.707410     710 scope.go:117] "RemoveContainer" containerID="47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.709367     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.709679     710 scope.go:117] "RemoveContainer" containerID="e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: E1019 17:15:33.709843     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:40 no-preload-806996 kubelet[710]: I1019 17:15:40.120348     710 scope.go:117] "RemoveContainer" containerID="e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	Oct 19 17:15:40 no-preload-806996 kubelet[710]: E1019 17:15:40.120555     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:50 no-preload-806996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:15:50 no-preload-806996 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:15:50 no-preload-806996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:15:50 no-preload-806996 systemd[1]: kubelet.service: Consumed 1.701s CPU time.
	
	
	==> kubernetes-dashboard [0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e] <==
	2025/10/19 17:15:10 Using namespace: kubernetes-dashboard
	2025/10/19 17:15:10 Using in-cluster config to connect to apiserver
	2025/10/19 17:15:10 Using secret token for csrf signing
	2025/10/19 17:15:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:15:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:15:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:15:10 Generating JWE encryption key
	2025/10/19 17:15:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:15:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:15:10 Initializing JWE encryption key from synchronized object
	2025/10/19 17:15:10 Creating in-cluster Sidecar client
	2025/10/19 17:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:15:10 Serving insecurely on HTTP port: 9090
	2025/10/19 17:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:15:10 Starting overwatch
	
	
	==> storage-provisioner [382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6] <==
	I1019 17:15:33.762939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:15:33.770989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:15:33.771048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:15:33.773274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:37.228985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:41.489843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:45.087965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:48.142250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.164285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.168454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:51.168625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:15:51.168754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ade808a3-50b3-4da9-9740-0f1294aa75ce", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a became leader
	I1019 17:15:51.168804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a!
	W1019 17:15:51.170559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.174574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:51.269126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a!
	W1019 17:15:53.185998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:53.193861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336] <==
	I1019 17:15:02.956331       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:15:32.960600       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
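The storage-provisioner instances above elect a leader through a legacy v1 Endpoints lock, which is what produces the repeated "v1 Endpoints is deprecated in v1.33+" client warnings. For reference, here is a minimal client-go sketch of the same election over the recommended coordination.k8s.io Lease lock; the lease name mirrors the one in the log, while the identity string and callbacks are illustrative and not the provisioner's actual code:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock instead of the deprecated Endpoints-based lock;
		// the object name mirrors the lease seen in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"}, // hypothetical identity
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership; shutting down")
				},
			},
		})
	}

A Lease-based lock would avoid both the deprecation warnings and the extra Endpoints traffic visible in the log.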
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806996 -n no-preload-806996: exit status 2 (346.56631ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-806996 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
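The post-mortem's pod check above filters with the field selector status.phase!=Running through kubectl. The equivalent query through client-go is sketched below, assuming the local kubeconfig; context selection and richer error handling are omitted for brevity:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig, as kubectl does.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same filter as the post-mortem: pods in any namespace whose phase is not Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}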
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-806996
helpers_test.go:243: (dbg) docker inspect no-preload-806996:

-- stdout --
	[
	    {
	        "Id": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	        "Created": "2025-10-19T17:13:34.261937795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:14:53.323922428Z",
	            "FinishedAt": "2025-10-19T17:14:51.990160047Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/hosts",
	        "LogPath": "/var/lib/docker/containers/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365/2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365-json.log",
	        "Name": "/no-preload-806996",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-806996:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-806996",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bbe9c0feed5aedbcb3c0e82084e03391100394ba6f66d3f42cedc2d4c4a5365",
	                "LowerDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fc43257768a3bf4fe5dabf66ba1cda632762e15d5c29b3c95b7c6c08c654924/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-806996",
	                "Source": "/var/lib/docker/volumes/no-preload-806996/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-806996",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-806996",
	                "name.minikube.sigs.k8s.io": "no-preload-806996",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "742448deeade6bba535c1fcfd233fb079772c1b79fa293486310ab459cdb23cc",
	            "SandboxKey": "/var/run/docker/netns/742448deeade",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-806996": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:69:aa:d2:a9:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73bac96357aad3b7cfe938f1f5873c93097c59bb8fc57dcc5d67449be0149246",
	                    "EndpointID": "3505c79141851bebc12827e25b33999c4efccd568095e197732fdd71bb9fd76c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-806996",
	                        "2bbe9c0feed5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
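The pause checks key off the State block of this inspect output (the Running and Paused flags). A minimal sketch of the same lookup via the Docker Engine Go SDK, assuming the standard github.com/docker/docker/client package; the container name is taken from the report:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		// Connect using the same environment the docker CLI uses (DOCKER_HOST, etc.).
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Any container ID or name works; this one comes from the inspect output above.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-806996")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status=%s running=%v paused=%v\n",
			info.State.Status, info.State.Running, info.State.Paused)
	}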
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996: exit status 2 (354.51926ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-806996 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-806996 logs -n 25: (1.166394432s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-659566       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-447724       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p stopped-upgrade-659566                                                                                                                                                                                                                     │ stopped-upgrade-659566       │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724       │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
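(That header layout is the standard klog format. A minimal Go sketch for splitting such a line into its fields; the regexp and field names here are illustrative, not minikube's own parser:

package main

import (
	"fmt"
	"regexp"
)

// Matches headers such as "I1019 17:15:31.903216  256207 out.go:360] msg":
// severity letter, mmdd date, microsecond time, thread id, file:line, message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch("I1019 17:15:31.903216  256207 out.go:360] Setting OutFile to fd 1 ...")
	if m != nil {
		fmt.Printf("level=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
)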
	I1019 17:15:31.903216  256207 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:31.903517  256207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:31.903527  256207 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:31.903532  256207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:31.903794  256207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:31.904315  256207 out.go:368] Setting JSON to false
	I1019 17:15:31.905475  256207 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1760890654,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:31.905563  256207 start.go:143] virtualization: kvm guest
	I1019 17:15:31.907811  256207 out.go:179] * [default-k8s-diff-port-663015] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:31.909608  256207 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:31.909635  256207 notify.go:221] Checking for updates...
	I1019 17:15:31.912686  256207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:31.914136  256207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:31.915707  256207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:31.917001  256207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:31.918498  256207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:31.920662  256207 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.920791  256207 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.920926  256207 config.go:182] Loaded profile config "no-preload-806996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:31.921038  256207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:31.952135  256207 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:31.952246  256207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:32.037820  256207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:32.02422055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:32.037984  256207 docker.go:319] overlay module found
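(The driver pre-flight above shells out to "docker system info --format {{json .}}" and decodes the JSON. A pared-down sketch of that step; the struct carries only a small subset of the fields visible in the log line:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// A few of the fields emitted by `docker system info --format "{{json .}}"`.
type dockerInfo struct {
	Driver        string
	CgroupDriver  string
	NCPU          int
	MemTotal      int64
	ServerVersion string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%d version=%s\n",
		info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal, info.ServerVersion)
}
)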
	I1019 17:15:32.043207  256207 out.go:179] * Using the docker driver based on user configuration
	I1019 17:15:32.044705  256207 start.go:309] selected driver: docker
	I1019 17:15:32.044734  256207 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:32.044748  256207 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:32.045394  256207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:32.126872  256207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:32.112012764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:32.127191  256207 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:15:32.127427  256207 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:15:32.129760  256207 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:32.131199  256207 cni.go:84] Creating CNI manager for ""
	I1019 17:15:32.131280  256207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:32.131297  256207 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:32.131405  256207 start.go:353] cluster config:
	{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:32.132908  256207 out.go:179] * Starting "default-k8s-diff-port-663015" primary control-plane node in "default-k8s-diff-port-663015" cluster
	I1019 17:15:32.134080  256207 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:32.135380  256207 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:32.136715  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:32.136756  256207 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:32.136766  256207 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:32.136777  256207 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:32.136869  256207 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:32.136879  256207 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
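(The preload verification above amounts to a stat on the cached tarball path. A sketch under the assumption that the cache lives under ~/.minikube when MINIKUBE_HOME is unset:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed default cache location; a real lookup would honor MINIKUBE_HOME.
	p := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
)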
	I1019 17:15:32.137002  256207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:15:32.137034  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json: {Name:mke61c039aa897d9a6dfc418982e7062d2453437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:32.165008  256207 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:32.165038  256207 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:32.165059  256207 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:32.165104  256207 start.go:360] acquireMachinesLock for default-k8s-diff-port-663015: {Name:mkc3b977c4f353256fa3816417a52809b235a030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:32.165230  256207 start.go:364] duration metric: took 101.636µs to acquireMachinesLock for "default-k8s-diff-port-663015"
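(The Delay:500ms/Timeout:10m0s fields in the lock spec above suggest a retry loop around lock acquisition; this sketch reproduces that shape with a placeholder tryLock in place of the real file lock:

package main

import (
	"fmt"
	"time"
)

// acquire retries tryLock every delay until success or timeout, returning the elapsed time.
func acquire(tryLock func() bool, delay, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		if tryLock() {
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return 0, fmt.Errorf("timed out after %s", timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	d, _ := acquire(func() bool { return true }, 500*time.Millisecond, 10*time.Minute)
	fmt.Printf("duration metric: took %s to acquireMachinesLock\n", d)
}
)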
	I1019 17:15:32.165263  256207 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:32.165347  256207 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:15:28.955363  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:31.454700  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:28.755498  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:28.755944  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:28.756020  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:28.756098  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:28.788829  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:28.788858  219832 cri.go:89] found id: ""
	I1019 17:15:28.788868  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:28.788930  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.793494  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:28.793572  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:28.823750  219832 cri.go:89] found id: ""
	I1019 17:15:28.823778  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.823788  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:28.823795  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:28.823851  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:28.856427  219832 cri.go:89] found id: ""
	I1019 17:15:28.856451  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.856462  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:28.856469  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:28.856525  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:28.888398  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:28.888425  219832 cri.go:89] found id: ""
	I1019 17:15:28.888435  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:28.888494  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.892982  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:28.893058  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:28.922468  219832 cri.go:89] found id: ""
	I1019 17:15:28.922497  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.922507  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:28.922517  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:28.922569  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:28.952452  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:28.952477  219832 cri.go:89] found id: ""
	I1019 17:15:28.952487  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:28.952559  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:28.957289  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:28.957360  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:28.986202  219832 cri.go:89] found id: ""
	I1019 17:15:28.986229  219832 logs.go:282] 0 containers: []
	W1019 17:15:28.986240  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:28.986247  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:28.986302  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:29.015162  219832 cri.go:89] found id: ""
	I1019 17:15:29.015190  219832 logs.go:282] 0 containers: []
	W1019 17:15:29.015201  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:29.015211  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:29.015225  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:29.091334  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:29.091517  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:29.091555  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:29.135520  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:29.135583  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:29.207086  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:29.207124  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:29.237568  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:29.237602  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:29.287158  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:29.287199  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:29.323049  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:29.323103  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:29.414864  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:29.414904  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
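(Each cycle above follows one pattern: probe the apiserver's /healthz, and on connection refusal fall back to gathering component logs. A minimal sketch of the probe half; the endpoint comes from the log, while the TLS skip and the retry interval are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver presents a self-signed cert, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2-3s between attempts
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second))
}
)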
	I1019 17:15:31.932122  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:31.932542  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:31.932604  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:31.932662  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:31.970732  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:31.970755  219832 cri.go:89] found id: ""
	I1019 17:15:31.970763  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:31.970819  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:31.979248  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:31.979322  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:32.020265  219832 cri.go:89] found id: ""
	I1019 17:15:32.020295  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.020306  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:32.020313  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:32.020376  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:32.056993  219832 cri.go:89] found id: ""
	I1019 17:15:32.057021  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.057033  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:32.057040  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:32.057113  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:32.100997  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:32.101031  219832 cri.go:89] found id: ""
	I1019 17:15:32.101042  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:32.101119  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:32.107278  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:32.107391  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:32.145984  219832 cri.go:89] found id: ""
	I1019 17:15:32.146011  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.146024  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:32.146031  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:32.146104  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:32.181273  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:32.181311  219832 cri.go:89] found id: ""
	I1019 17:15:32.181321  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:32.181378  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:32.186566  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:32.186638  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:32.221871  219832 cri.go:89] found id: ""
	I1019 17:15:32.221970  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.221987  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:32.221995  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:32.222111  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:32.259656  219832 cri.go:89] found id: ""
	I1019 17:15:32.259684  219832 logs.go:282] 0 containers: []
	W1019 17:15:32.259694  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:32.259704  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:32.259719  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:32.342424  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:32.342470  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:32.380735  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:32.380771  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:32.450369  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:32.450453  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:32.494816  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:32.494853  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:32.627839  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:32.627892  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:32.645946  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:32.645983  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:32.733672  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:32.733698  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:32.733713  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:30.116754  251026 out.go:252]   - Booting up control plane ...
	I1019 17:15:30.116862  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:15:30.116950  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:15:30.117072  251026 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:15:30.131991  251026 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:15:30.132237  251026 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:15:30.140050  251026 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:15:30.140219  251026 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:15:30.140283  251026 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:15:30.248100  251026 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:15:30.248285  251026 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:15:30.749992  251026 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.099967ms
	I1019 17:15:30.752881  251026 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:15:30.753025  251026 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1019 17:15:30.753141  251026 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:15:30.753208  251026 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:15:33.012669  251026 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.257250612s
	I1019 17:15:33.146473  251026 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.39358791s
	I1019 17:15:34.754665  251026 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001688751s
	I1019 17:15:34.767969  251026 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:15:34.780120  251026 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:15:34.790181  251026 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:15:34.790477  251026 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-090139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:15:34.799881  251026 kubeadm.go:319] [bootstrap-token] Using token: 9zgr3w.nm3btzu7j71lm9u2
	I1019 17:15:34.801746  251026 out.go:252]   - Configuring RBAC rules ...
	I1019 17:15:34.801887  251026 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:15:34.804977  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:15:34.811365  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:15:34.814127  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:15:34.818296  251026 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:15:34.821494  251026 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:15:35.161399  251026 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:15:35.988488  251026 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:15:36.858272  251026 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:15:36.859538  251026 kubeadm.go:319] 
	I1019 17:15:36.859687  251026 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:15:36.859714  251026 kubeadm.go:319] 
	I1019 17:15:36.859813  251026 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:15:36.859825  251026 kubeadm.go:319] 
	I1019 17:15:36.859857  251026 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:15:36.859951  251026 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:15:36.860021  251026 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:15:36.860030  251026 kubeadm.go:319] 
	I1019 17:15:36.860118  251026 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:15:36.860132  251026 kubeadm.go:319] 
	I1019 17:15:36.860194  251026 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:15:36.860199  251026 kubeadm.go:319] 
	I1019 17:15:36.860265  251026 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:15:36.860361  251026 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:15:36.860465  251026 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:15:36.860481  251026 kubeadm.go:319] 
	I1019 17:15:36.860600  251026 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:15:36.860726  251026 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:15:36.860747  251026 kubeadm.go:319] 
	I1019 17:15:36.860860  251026 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9zgr3w.nm3btzu7j71lm9u2 \
	I1019 17:15:36.861036  251026 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:15:36.861085  251026 kubeadm.go:319] 	--control-plane 
	I1019 17:15:36.861096  251026 kubeadm.go:319] 
	I1019 17:15:36.861217  251026 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:15:36.861235  251026 kubeadm.go:319] 
	I1019 17:15:36.861338  251026 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9zgr3w.nm3btzu7j71lm9u2 \
	I1019 17:15:36.861469  251026 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 17:15:36.864686  251026 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:15:36.864846  251026 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
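(The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). It can be recomputed from the CA certificate; this sketch assumes the standard kubeadm path on the control-plane node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
)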
	I1019 17:15:36.864878  251026 cni.go:84] Creating CNI manager for ""
	I1019 17:15:36.864890  251026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:32.170272  256207 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:32.170554  256207 start.go:159] libmachine.API.Create for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:15:32.170616  256207 client.go:171] LocalClient.Create starting
	I1019 17:15:32.170743  256207 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:32.170792  256207 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:32.170817  256207 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:32.170881  256207 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:32.170901  256207 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:32.170912  256207 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:32.171340  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:32.194917  256207 cli_runner.go:211] docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:32.194994  256207 network_create.go:284] running [docker network inspect default-k8s-diff-port-663015] to gather additional debugging logs...
	I1019 17:15:32.195018  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015
	W1019 17:15:32.220045  256207 cli_runner.go:211] docker network inspect default-k8s-diff-port-663015 returned with exit code 1
	I1019 17:15:32.220124  256207 network_create.go:287] error running [docker network inspect default-k8s-diff-port-663015]: docker network inspect default-k8s-diff-port-663015: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-663015 not found
	I1019 17:15:32.220142  256207 network_create.go:289] output of [docker network inspect default-k8s-diff-port-663015]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-663015 not found
	
	** /stderr **
	I1019 17:15:32.220265  256207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:32.245048  256207 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:32.246145  256207 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:32.247275  256207 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:32.248100  256207 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73bac96357aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:13:5a:d3:70} reservation:<nil>}
	I1019 17:15:32.249260  256207 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f94e30}
	I1019 17:15:32.249291  256207 network_create.go:124] attempt to create docker network default-k8s-diff-port-663015 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:15:32.249353  256207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 default-k8s-diff-port-663015
	I1019 17:15:32.327302  256207 network_create.go:108] docker network default-k8s-diff-port-663015 192.168.85.0/24 created
	I1019 17:15:32.327342  256207 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-663015" container
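(The subnet walk above steps through 192.168.x.0/24 candidates in increments of 9 (49, 58, 67, 76, 85), takes the first one no existing bridge occupies, and derives the gateway (.1) and the container's static IP (.2). A sketch with a stand-in taken() table in place of the real "docker network inspect" probe:

package main

import "fmt"

func main() {
	// Subnets already claimed by other profiles, per the log above.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	for third := 49; third < 255; third += 9 {
		if taken[third] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
			continue
		}
		fmt.Printf("using free subnet 192.168.%d.0/24, gateway 192.168.%d.1, static IP 192.168.%d.2\n",
			third, third, third)
		break
	}
}
)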
	I1019 17:15:32.327418  256207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:32.350545  256207 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-663015 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:32.374502  256207 oci.go:103] Successfully created a docker volume default-k8s-diff-port-663015
	I1019 17:15:32.374587  256207 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-663015-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --entrypoint /usr/bin/test -v default-k8s-diff-port-663015:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:15:32.864578  256207 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-663015
	I1019 17:15:32.864634  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:32.864661  256207 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:15:32.864737  256207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-663015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:15:37.003206  251026 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 17:15:33.454957  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	W1019 17:15:35.953453  245899 pod_ready.go:104] pod "coredns-66bc5c9577-s4dxw" is not "Ready", error: <nil>
	I1019 17:15:36.453745  245899 pod_ready.go:94] pod "coredns-66bc5c9577-s4dxw" is "Ready"
	I1019 17:15:36.453785  245899 pod_ready.go:86] duration metric: took 32.505068187s for pod "coredns-66bc5c9577-s4dxw" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.456235  245899 pod_ready.go:83] waiting for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.459937  245899 pod_ready.go:94] pod "etcd-no-preload-806996" is "Ready"
	I1019 17:15:36.459958  245899 pod_ready.go:86] duration metric: took 3.68272ms for pod "etcd-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.461849  245899 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.465454  245899 pod_ready.go:94] pod "kube-apiserver-no-preload-806996" is "Ready"
	I1019 17:15:36.465477  245899 pod_ready.go:86] duration metric: took 3.608848ms for pod "kube-apiserver-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.467204  245899 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.651960  245899 pod_ready.go:94] pod "kube-controller-manager-no-preload-806996" is "Ready"
	I1019 17:15:36.651986  245899 pod_ready.go:86] duration metric: took 184.763994ms for pod "kube-controller-manager-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:36.852203  245899 pod_ready.go:83] waiting for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.253368  245899 pod_ready.go:94] pod "kube-proxy-76f5v" is "Ready"
	I1019 17:15:37.253421  245899 pod_ready.go:86] duration metric: took 401.192762ms for pod "kube-proxy-76f5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.452538  245899 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.852685  245899 pod_ready.go:94] pod "kube-scheduler-no-preload-806996" is "Ready"
	I1019 17:15:37.852710  245899 pod_ready.go:86] duration metric: took 400.146919ms for pod "kube-scheduler-no-preload-806996" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:15:37.852722  245899 pod_ready.go:40] duration metric: took 33.908905676s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:15:37.908548  245899 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:15:37.910333  245899 out.go:179] * Done! kubectl is now configured to use "no-preload-806996" cluster and "default" namespace by default
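(The readiness wait that just completed follows one shape per pod: poll until the Ready condition holds or the pod is gone, then record a duration metric. A reduced sketch with a placeholder status check standing in for the real client-go lookup:

package main

import (
	"fmt"
	"time"
)

// waitReady polls isReady (which reports ready and gone) until one holds or timeout.
func waitReady(pod string, isReady func(string) (ready, gone bool), timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		ready, gone := isReady(pod)
		if ready || gone {
			fmt.Printf("duration metric: took %s for pod %q\n", time.Since(start), pod)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q not ready after %s", pod, timeout)
}

func main() {
	calls := 0
	_ = waitReady("coredns-66bc5c9577-s4dxw", func(string) (bool, bool) {
		calls++
		return calls > 3, false // pretend the pod becomes Ready on the 4th poll
	}, 10*time.Second)
}
)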
	I1019 17:15:35.275419  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:15:35.275887  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:15:35.275958  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:35.276016  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:35.307678  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:35.307700  219832 cri.go:89] found id: ""
	I1019 17:15:35.307708  219832 logs.go:282] 1 containers: [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:35.307753  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.312853  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:35.312928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:35.346037  219832 cri.go:89] found id: ""
	I1019 17:15:35.346092  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.346104  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:35.346111  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:35.346165  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:35.378630  219832 cri.go:89] found id: ""
	I1019 17:15:35.378662  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.378673  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:35.378680  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:35.378735  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:35.413360  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:35.413387  219832 cri.go:89] found id: ""
	I1019 17:15:35.413399  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:35.413457  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.418870  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:35.419162  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:35.456695  219832 cri.go:89] found id: ""
	I1019 17:15:35.456724  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.456734  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:35.456742  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:35.456796  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:35.488034  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:35.488058  219832 cri.go:89] found id: ""
	I1019 17:15:35.488080  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:35.488134  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:35.492575  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:35.492635  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:35.519809  219832 cri.go:89] found id: ""
	I1019 17:15:35.519831  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.519839  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:35.519844  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:35.519890  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:35.549606  219832 cri.go:89] found id: ""
	I1019 17:15:35.549630  219832 logs.go:282] 0 containers: []
	W1019 17:15:35.549638  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:35.549646  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:35.549657  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:35.643806  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:35.643849  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:35.659863  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:35.659899  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:35.717571  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:35.717599  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:35.717615  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:35.750362  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:35.750394  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:35.812487  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:35.812520  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:35.838694  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:35.838719  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:35.888325  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:35.888359  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:38.421141  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
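	
	Note: the log-gathering pass above can be reproduced by hand on the node with the same commands the tooling issues (the container ID is a placeholder):
	
		sudo crictl ps -a --quiet --name=kube-apiserver
		sudo crictl logs --tail 400 <container-id>
		sudo journalctl -u kubelet -n 400
	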
	I1019 17:15:37.164230  251026 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:15:37.170103  251026 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:15:37.170128  251026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:15:37.184684  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:15:37.657163  251026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:15:37.657340  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:37.657439  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-090139 minikube.k8s.io/updated_at=2025_10_19T17_15_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=embed-certs-090139 minikube.k8s.io/primary=true
	I1019 17:15:37.669594  251026 ops.go:34] apiserver oom_adj: -16
	I1019 17:15:37.745746  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:38.246757  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:38.746298  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:39.246272  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:37.572436  256207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-663015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.707618201s)
	I1019 17:15:37.572480  256207 kic.go:203] duration metric: took 4.707815653s to extract preloaded images to volume ...
	W1019 17:15:37.572587  256207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:15:37.572638  256207 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:15:37.572749  256207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:15:37.640259  256207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-663015 --name default-k8s-diff-port-663015 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-663015 --network default-k8s-diff-port-663015 --ip 192.168.85.2 --volume default-k8s-diff-port-663015:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:15:37.976856  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Running}}
	I1019 17:15:38.001144  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.024268  256207 cli_runner.go:164] Run: docker exec default-k8s-diff-port-663015 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:15:38.075095  256207 oci.go:144] the created container "default-k8s-diff-port-663015" has a running status.
	I1019 17:15:38.075129  256207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa...
	I1019 17:15:38.375731  256207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:15:38.406364  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.427531  256207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:15:38.427554  256207 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-663015 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:15:38.478828  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:38.498417  256207 machine.go:94] provisionDockerMachine start ...
	I1019 17:15:38.498538  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.517560  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.517801  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.517813  256207 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:15:38.654113  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:15:38.654142  256207 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-663015"
	I1019 17:15:38.654206  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.675521  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.675839  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.675862  256207 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-663015 && echo "default-k8s-diff-port-663015" | sudo tee /etc/hostname
	I1019 17:15:38.825226  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:15:38.825289  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:38.843802  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:38.844010  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:38.844030  256207 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-663015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-663015/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-663015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:15:38.979170  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:15:38.979205  256207 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:15:38.979244  256207 ubuntu.go:190] setting up certificates
	I1019 17:15:38.979256  256207 provision.go:84] configureAuth start
	I1019 17:15:38.979315  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:38.997141  256207 provision.go:143] copyHostCerts
	I1019 17:15:38.997221  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:15:38.997236  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:15:38.997309  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:15:38.997392  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:15:38.997401  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:15:38.997428  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:15:38.997483  256207 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:15:38.997490  256207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:15:38.997520  256207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:15:38.997569  256207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-663015 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-663015 localhost minikube]
	I1019 17:15:39.115013  256207 provision.go:177] copyRemoteCerts
	I1019 17:15:39.115087  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:15:39.115123  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.134554  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.232716  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:15:39.253526  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:15:39.273304  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:15:39.292760  256207 provision.go:87] duration metric: took 313.488795ms to configureAuth
	I1019 17:15:39.292789  256207 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:15:39.292957  256207 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:39.293056  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.314595  256207 main.go:143] libmachine: Using SSH client type: native
	I1019 17:15:39.314929  256207 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1019 17:15:39.314960  256207 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:15:39.565321  256207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:15:39.565346  256207 machine.go:97] duration metric: took 1.06690265s to provisionDockerMachine
	I1019 17:15:39.565359  256207 client.go:174] duration metric: took 7.394730229s to LocalClient.Create
	I1019 17:15:39.565373  256207 start.go:167] duration metric: took 7.394822286s to libmachine.API.Create "default-k8s-diff-port-663015"
	I1019 17:15:39.565382  256207 start.go:293] postStartSetup for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:15:39.565395  256207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:15:39.565457  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:15:39.565504  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.587442  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.687547  256207 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:15:39.691397  256207 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:15:39.691424  256207 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:15:39.691435  256207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:15:39.691486  256207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:15:39.691569  256207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:15:39.691660  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:15:39.699588  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:39.721659  256207 start.go:296] duration metric: took 156.259902ms for postStartSetup
	I1019 17:15:39.722148  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:39.740993  256207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:15:39.741315  256207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:15:39.741365  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.760154  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.858550  256207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:15:39.863203  256207 start.go:128] duration metric: took 7.697842458s to createHost
	I1019 17:15:39.863230  256207 start.go:83] releasing machines lock for "default-k8s-diff-port-663015", held for 7.697984929s
	I1019 17:15:39.863301  256207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:15:39.882426  256207 ssh_runner.go:195] Run: cat /version.json
	I1019 17:15:39.882473  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.882519  256207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:15:39.882586  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:39.901461  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:39.902472  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:40.066790  256207 ssh_runner.go:195] Run: systemctl --version
	I1019 17:15:40.074104  256207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:15:40.113529  256207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:15:40.118448  256207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:15:40.118515  256207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:15:40.147295  256207 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 17:15:40.147323  256207 start.go:496] detecting cgroup driver to use...
	I1019 17:15:40.147353  256207 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:15:40.147390  256207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:15:40.164307  256207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:15:40.178315  256207 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:15:40.178386  256207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:15:40.198676  256207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:15:40.216674  256207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:15:40.304996  256207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:15:40.396168  256207 docker.go:234] disabling docker service ...
	I1019 17:15:40.396238  256207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:15:40.415926  256207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:15:40.429962  256207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:15:40.517635  256207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:15:40.605312  256207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:15:40.620142  256207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:15:40.635325  256207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:15:40.635377  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.646022  256207 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:15:40.646091  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.655485  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.664440  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.673866  256207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:15:40.682533  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.691463  256207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.706131  256207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:15:40.715518  256207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:15:40.723727  256207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:15:40.731898  256207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:40.817545  256207 ssh_runner.go:195] Run: sudo systemctl restart crio
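	
	Note: assuming the stock kicbase image layout, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly:
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	which is why the crio restart directly above is needed for the new pause image and cgroup driver to take effect.
	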
	I1019 17:15:40.931737  256207 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:15:40.931830  256207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:15:40.936504  256207 start.go:564] Will wait 60s for crictl version
	I1019 17:15:40.936568  256207 ssh_runner.go:195] Run: which crictl
	I1019 17:15:40.941086  256207 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:15:40.965153  256207 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:15:40.965238  256207 ssh_runner.go:195] Run: crio --version
	I1019 17:15:40.995614  256207 ssh_runner.go:195] Run: crio --version
	I1019 17:15:41.025460  256207 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:15:39.746530  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:40.246220  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:40.746245  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:41.245838  251026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:15:41.324153  251026 kubeadm.go:1114] duration metric: took 3.666852468s to wait for elevateKubeSystemPrivileges
	I1019 17:15:41.324196  251026 kubeadm.go:403] duration metric: took 16.35096448s to StartCluster
	I1019 17:15:41.324218  251026 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.324284  251026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:41.325758  251026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.326006  251026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:41.326031  251026 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:15:41.326149  251026 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-090139"
	I1019 17:15:41.326020  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:15:41.326185  251026 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-090139"
	I1019 17:15:41.326230  251026 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:41.326230  251026 addons.go:70] Setting default-storageclass=true in profile "embed-certs-090139"
	I1019 17:15:41.326235  251026 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:15:41.326277  251026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-090139"
	I1019 17:15:41.326790  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.326853  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.328042  251026 out.go:179] * Verifying Kubernetes components...
	I1019 17:15:41.332640  251026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:41.354584  251026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:15:41.357694  251026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:41.357719  251026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:15:41.357780  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:41.358779  251026 addons.go:239] Setting addon default-storageclass=true in "embed-certs-090139"
	I1019 17:15:41.358878  251026 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:15:41.359802  251026 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:15:41.387971  251026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:41.388096  251026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:15:41.388221  251026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:15:41.394302  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:41.414598  251026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:15:41.440465  251026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:15:41.496914  251026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:41.524297  251026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:41.548047  251026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:41.653010  251026 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
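	
	Note: the replace command above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves from inside pods; per the sed expression, the injected fragment is:
	
		hosts {
		   192.168.103.1 host.minikube.internal
		   fallthrough
		}
	
	It can be inspected afterwards with kubectl -n kube-system get configmap coredns -o yaml.
	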
	I1019 17:15:41.654727  251026 node_ready.go:35] waiting up to 6m0s for node "embed-certs-090139" to be "Ready" ...
	I1019 17:15:41.846223  251026 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:15:41.026574  256207 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:41.045566  256207 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:15:41.050440  256207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:15:41.061004  256207 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:15:41.061140  256207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:41.061184  256207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:41.095482  256207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:41.095509  256207 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:15:41.095561  256207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:15:41.123554  256207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:15:41.123580  256207 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:15:41.123589  256207 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 17:15:41.123667  256207 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-663015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
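	
	Note: the kubelet unit text above is installed a few lines below as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in; the merged result can be checked on the node with (illustrative):
	
		systemctl cat kubelet
	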
	I1019 17:15:41.123755  256207 ssh_runner.go:195] Run: crio config
	I1019 17:15:41.180314  256207 cni.go:84] Creating CNI manager for ""
	I1019 17:15:41.180335  256207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:41.180352  256207 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:15:41.180374  256207 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-663015 NodeName:default-k8s-diff-port-663015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:15:41.180496  256207 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-663015"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:15:41.180552  256207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:15:41.189161  256207 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:15:41.189228  256207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:15:41.197116  256207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 17:15:41.210084  256207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:15:41.226436  256207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
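	
	Note: the rendered kubeadm manifest is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp above. If validating such a file by hand were needed, recent kubeadm releases (v1.26+) can check it; a hypothetical invocation, not part of this run, would be:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	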
	I1019 17:15:41.241200  256207 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:15:41.245461  256207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:15:41.256758  256207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:15:41.359061  256207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:41.402304  256207 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015 for IP: 192.168.85.2
	I1019 17:15:41.402328  256207 certs.go:195] generating shared ca certs ...
	I1019 17:15:41.402347  256207 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.402497  256207 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:15:41.402571  256207 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:15:41.402587  256207 certs.go:257] generating profile certs ...
	I1019 17:15:41.402658  256207 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key
	I1019 17:15:41.402821  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt with IP's: []
	I1019 17:15:41.634999  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt ...
	I1019 17:15:41.635025  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.crt: {Name:mka0500442723f4230e6b879df857ac40daca047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.635231  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key ...
	I1019 17:15:41.635245  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key: {Name:mk43309fea32c11e9d1f599c181892c4b5610699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:41.635361  256207 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db
	I1019 17:15:41.635375  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:15:43.422477  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 17:15:43.422536  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:15:43.422595  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:15:43.453022  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:15:43.453047  219832 cri.go:89] found id: "9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:43.453052  219832 cri.go:89] found id: ""
	I1019 17:15:43.453061  219832 logs.go:282] 2 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc]
	I1019 17:15:43.453197  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.458100  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.462909  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:15:43.462978  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:15:43.505928  219832 cri.go:89] found id: ""
	I1019 17:15:43.505962  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.505972  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:15:43.505979  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:15:43.506052  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:15:42.209858  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db ...
	I1019 17:15:42.209884  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db: {Name:mkfa7a703df391bd931b2cedfca2d3a4614585df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:42.210046  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db ...
	I1019 17:15:42.210058  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db: {Name:mk5a4912e2b7a54fbc36d39103af69e291ffd333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:42.210174  256207 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt.d3e891db -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt
	I1019 17:15:42.210256  256207 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key
	I1019 17:15:42.210313  256207 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key
	I1019 17:15:42.210329  256207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt with IP's: []
	I1019 17:15:43.437025  256207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt ...
	I1019 17:15:43.437056  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt: {Name:mk853093f2a301d2ed2f91679f038f64b5d184c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:43.437241  256207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key ...
	I1019 17:15:43.437258  256207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key: {Name:mk6993b74fbbc0917420597a5a89aa15195ac013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:43.437486  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:15:43.437541  256207 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:15:43.437557  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:15:43.437596  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:15:43.437630  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:15:43.437665  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:15:43.437723  256207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:15:43.438520  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:15:43.461622  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:15:43.484190  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:15:43.516604  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:15:43.536832  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 17:15:43.555288  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:15:43.575301  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:15:43.596752  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:15:43.616537  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:15:43.638813  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:15:43.658945  256207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:15:43.678498  256207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:15:43.694833  256207 ssh_runner.go:195] Run: openssl version
	I1019 17:15:43.702262  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:15:43.712580  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.717141  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.717213  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:15:43.756376  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:15:43.766487  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:15:43.775841  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.779900  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.779983  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:15:43.815825  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:15:43.824736  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:15:43.833598  256207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.837656  256207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.837743  256207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:15:43.873845  256207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
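	
	Note: the 8-hex-digit symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention; the hash comes from the same command the tooling runs, e.g.:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink
	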
	I1019 17:15:43.883853  256207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:15:43.887820  256207 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:15:43.887885  256207 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:43.887968  256207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:15:43.888016  256207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:15:43.916830  256207 cri.go:89] found id: ""
	I1019 17:15:43.916906  256207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:15:43.925402  256207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:15:43.933710  256207 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:15:43.933764  256207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:15:43.942092  256207 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:15:43.942112  256207 kubeadm.go:158] found existing configuration files:
	
	I1019 17:15:43.942164  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1019 17:15:43.950196  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:15:43.950247  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:15:43.957888  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1019 17:15:43.965641  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:15:43.965707  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:15:43.973279  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1019 17:15:43.981825  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:15:43.981890  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:15:43.989801  256207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1019 17:15:43.997723  256207 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:15:43.997781  256207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
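Taken together, the four grep/rm pairs above are the stale-kubeconfig sweep: any kubeconfig under /etc/kubernetes that does not reference the expected endpoint (https://control-plane.minikube.internal:8444) is removed so kubeadm can regenerate it. An equivalent sketch, condensing the file list from the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done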
	I1019 17:15:44.005560  256207 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:15:44.043217  256207 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:15:44.043297  256207 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:15:44.064754  256207 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:15:44.064864  256207 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:15:44.064922  256207 kubeadm.go:319] OS: Linux
	I1019 17:15:44.065005  256207 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:15:44.065104  256207 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:15:44.065188  256207 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:15:44.065346  256207 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:15:44.065432  256207 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:15:44.065502  256207 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:15:44.065590  256207 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:15:44.065679  256207 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:15:44.125877  256207 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:15:44.126034  256207 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:15:44.126157  256207 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:15:44.134504  256207 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:15:41.847425  251026 addons.go:515] duration metric: took 521.387494ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:15:42.157918  251026 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-090139" context rescaled to 1 replicas
	W1019 17:15:43.658171  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
	I1019 17:15:44.137441  256207 out.go:252]   - Generating certificates and keys ...
	I1019 17:15:44.137515  256207 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:15:44.137580  256207 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:15:44.744872  256207 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:15:44.942133  256207 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:15:45.141425  256207 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:15:45.219605  256207 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:15:45.420047  256207 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:15:45.420219  256207 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-663015 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:15:45.657023  256207 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:15:45.657207  256207 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-663015 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:15:45.737294  256207 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:15:45.908211  256207 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:15:46.348591  256207 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:15:46.348696  256207 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:15:46.437698  256207 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:15:46.536617  256207 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:15:46.563465  256207 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:15:47.055139  256207 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:15:47.373764  256207 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:15:47.374284  256207 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:15:47.378449  256207 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:15:43.534868  219832 cri.go:89] found id: ""
	I1019 17:15:43.534899  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.534912  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:15:43.534920  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:15:43.534974  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:15:43.564642  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:43.564669  219832 cri.go:89] found id: ""
	I1019 17:15:43.564680  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:15:43.564734  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.568925  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:15:43.569005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:15:43.597841  219832 cri.go:89] found id: ""
	I1019 17:15:43.597866  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.597875  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:15:43.597881  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:15:43.597934  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:15:43.627780  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:43.627811  219832 cri.go:89] found id: ""
	I1019 17:15:43.627822  219832 logs.go:282] 1 containers: [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:15:43.627878  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:15:43.631730  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:15:43.631785  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:15:43.662084  219832 cri.go:89] found id: ""
	I1019 17:15:43.662111  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.662122  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:43.662129  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:43.662187  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:43.692768  219832 cri.go:89] found id: ""
	I1019 17:15:43.692802  219832 logs.go:282] 0 containers: []
	W1019 17:15:43.692814  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:43.692832  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:43.692845  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:43.709666  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:43.709700  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
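For reference, "crictl ps -a --quiet --name=<pattern>" prints bare container IDs, one per line, including exited containers; an empty result is what the log reports as found id: "" and 0 containers. A hedged manual reproduction of the discovery step, reusing an ID found above:

	sudo crictl ps -a --quiet --name=kube-scheduler
	sudo crictl logs 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f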
	W1019 17:15:46.158385  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
	W1019 17:15:48.658027  251026 node_ready.go:57] node "embed-certs-090139" has "Ready":"False" status (will retry)
	I1019 17:15:47.379914  256207 out.go:252]   - Booting up control plane ...
	I1019 17:15:47.380054  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:15:47.380149  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:15:47.381527  256207 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:15:47.410678  256207 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:15:47.410879  256207 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:15:47.418293  256207 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:15:47.418443  256207 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:15:47.418525  256207 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:15:47.523529  256207 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:15:47.523666  256207 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:15:48.025432  256207 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.015996ms
	I1019 17:15:48.028488  256207 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:15:48.028611  256207 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1019 17:15:48.028746  256207 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:15:48.028854  256207 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:15:49.372406  256207 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.343795411s
	I1019 17:15:50.551580  256207 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.523033306s
	I1019 17:15:52.530456  256207 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501812486s
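The three control-plane-check probes above hit plain HTTPS health endpoints, so they can be replayed by hand when a boot stalls (URLs verbatim from the log; -k because the ports serve self-signed certificates):

	curl -sk https://192.168.85.2:8444/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler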
	I1019 17:15:52.544703  256207 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:15:52.560898  256207 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:15:52.573543  256207 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:15:52.573830  256207 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-663015 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:15:52.583809  256207 kubeadm.go:319] [bootstrap-token] Using token: u7ioim.4c2n584sgxnxrmli
	I1019 17:15:52.585689  256207 out.go:252]   - Configuring RBAC rules ...
	I1019 17:15:52.585848  256207 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:15:52.592876  256207 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:15:52.603593  256207 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:15:52.606678  256207 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:15:52.609741  256207 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:15:52.612629  256207 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:15:52.936779  256207 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:15:53.356056  256207 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:15:53.938412  256207 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:15:53.938829  256207 kubeadm.go:319] 
	I1019 17:15:53.938941  256207 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:15:53.938952  256207 kubeadm.go:319] 
	I1019 17:15:53.939036  256207 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:15:53.939045  256207 kubeadm.go:319] 
	I1019 17:15:53.939094  256207 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:15:53.939162  256207 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:15:53.939205  256207 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:15:53.939216  256207 kubeadm.go:319] 
	I1019 17:15:53.939260  256207 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:15:53.939264  256207 kubeadm.go:319] 
	I1019 17:15:53.939303  256207 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:15:53.939308  256207 kubeadm.go:319] 
	I1019 17:15:53.939372  256207 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:15:53.939434  256207 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:15:53.939500  256207 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:15:53.939506  256207 kubeadm.go:319] 
	I1019 17:15:53.939700  256207 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:15:53.939965  256207 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:15:53.939995  256207 kubeadm.go:319] 
	I1019 17:15:53.940186  256207 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token u7ioim.4c2n584sgxnxrmli \
	I1019 17:15:53.940315  256207 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:15:53.940346  256207 kubeadm.go:319] 	--control-plane 
	I1019 17:15:53.940352  256207 kubeadm.go:319] 
	I1019 17:15:53.940465  256207 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:15:53.940472  256207 kubeadm.go:319] 
	I1019 17:15:53.940562  256207 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token u7ioim.4c2n584sgxnxrmli \
	I1019 17:15:53.940716  256207 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 17:15:53.943710  256207 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:15:53.943827  256207 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
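If the join parameters above are lost, --discovery-token-ca-cert-hash can be recomputed at any time from the cluster CA with the standard kubeadm recipe (CA path per the certificateDir logged earlier; assumes an RSA CA key, as here):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'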
	I1019 17:15:53.943859  256207 cni.go:84] Creating CNI manager for ""
	I1019 17:15:53.943872  256207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:53.945659  256207 out.go:179] * Configuring CNI (Container Networking Interface) ...
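kindnet is recommended here because the crio runtime inside the docker driver needs an in-cluster CNI, and minikube applies the kindnet DaemonSet itself in the step that follows. A hedged way to confirm the result once the node is up (the app=kindnet label follows upstream kindnet and should be verified against the applied manifest):

	kubectl -n kube-system get pods -l app=kindnet -o wide
	kubectl get node default-k8s-diff-port-663015 -o jsonpath='{.spec.podCIDR}'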
	
	
	==> CRI-O <==
	Oct 19 17:15:13 no-preload-806996 crio[559]: time="2025-10-19T17:15:13.693284435Z" level=info msg="Started container" PID=1733 containerID=61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper id=9cf70d60-a4dc-4487-a41e-40bdfdb602a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45b9c7496b54b527258d3f06df78dc1260eaa53cb47dff8ab74c4b8b47970d0a
	Oct 19 17:15:14 no-preload-806996 crio[559]: time="2025-10-19T17:15:14.648842917Z" level=info msg="Removing container: 794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b" id=a177b77f-6cbc-4086-802b-8ce8e964a03d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:14 no-preload-806996 crio[559]: time="2025-10-19T17:15:14.665136363Z" level=info msg="Removed container 794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=a177b77f-6cbc-4086-802b-8ce8e964a03d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.576967804Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2891ebb9-ec2f-4aa1-9d36-768dbf3a743f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.577925618Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fcbfe6f5-bbde-4128-924e-3d5e3851e742 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.578935404Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=42c0ddf6-276d-41df-bb92-886e75b99117 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.579231904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.585161646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.585629929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.630407052Z" level=info msg="Created container e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=42c0ddf6-276d-41df-bb92-886e75b99117 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.631080531Z" level=info msg="Starting container: e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c" id=7645c12a-4a95-4b0a-9bb6-2b725a8ea603 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.633169583Z" level=info msg="Started container" PID=1747 containerID=e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper id=7645c12a-4a95-4b0a-9bb6-2b725a8ea603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45b9c7496b54b527258d3f06df78dc1260eaa53cb47dff8ab74c4b8b47970d0a
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.707781083Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6aa3e0e3-e737-4de9-aba5-662227ab9fcf name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.708749265Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=744d2e3d-d8c7-4228-8ad5-53bba62757cd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.709824488Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d62ab1b4-0490-4263-a067-a286fec42254 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.71015526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.712255134Z" level=info msg="Removing container: 61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82" id=97c6c55d-a3d9-4891-9eb8-8ace769272e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.714925804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715138711Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/55b50df613592c7ef25de03b8250e5eee1e826c0c304a3ebc2b95c1cd1a82dca/merged/etc/passwd: no such file or directory"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715169688Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/55b50df613592c7ef25de03b8250e5eee1e826c0c304a3ebc2b95c1cd1a82dca/merged/etc/group: no such file or directory"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.715469408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.735849171Z" level=info msg="Removed container 61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5/dashboard-metrics-scraper" id=97c6c55d-a3d9-4891-9eb8-8ace769272e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.746813627Z" level=info msg="Created container 382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6: kube-system/storage-provisioner/storage-provisioner" id=d62ab1b4-0490-4263-a067-a286fec42254 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.747443384Z" level=info msg="Starting container: 382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6" id=05b11498-30ce-4896-a5a8-ce5dafe1cd3f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:33 no-preload-806996 crio[559]: time="2025-10-19T17:15:33.749713979Z" level=info msg="Started container" PID=1757 containerID=382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6 description=kube-system/storage-provisioner/storage-provisioner id=05b11498-30ce-4896-a5a8-ce5dafe1cd3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=57a0cbf50ea377e3b6c16260ec883cfd86d8f81c96a93f54ea65a283ec4c9a3b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	382f76d5c0a2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   57a0cbf50ea37       storage-provisioner                          kube-system
	e06905acc2972       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   45b9c7496b54b       dashboard-metrics-scraper-6ffb444bf9-s96d5   kubernetes-dashboard
	0a2f39ea915fe       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   b61f8b3a8c704       kubernetes-dashboard-855c9754f9-8t886        kubernetes-dashboard
	df36eb3175777       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   da2521ad0be89       busybox                                      default
	c4ea4d266cd9b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   bfb2b4e132e9f       coredns-66bc5c9577-s4dxw                     kube-system
	8d4d5fee23b45       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   9739150f72253       kindnet-zndcx                                kube-system
	47a6f8337391e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   57a0cbf50ea37       storage-provisioner                          kube-system
	b6a81c8dbabbd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   a24d941074ec6       kube-proxy-76f5v                             kube-system
	57798f07866c6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   4887fa43c34be       etcd-no-preload-806996                       kube-system
	fce011c2a0450       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   fc97d656280e7       kube-apiserver-no-preload-806996             kube-system
	bc11f1b63d4f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   951a0793b721b       kube-scheduler-no-preload-806996             kube-system
	59114e638c3e3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   4e0d9b4f2640f       kube-controller-manager-no-preload-806996    kube-system
	
	
	==> coredns [c4ea4d266cd9b9e2ae1a1e77308823cee3564f252f0fe2fd4ee039dde6dedd7a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38216 - 37030 "HINFO IN 7320756310291767829.2100046436382029024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080923161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
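The dial tcp 10.96.0.1:443: i/o timeout errors above mean CoreDNS briefly could not reach the in-cluster apiserver ClusterIP while kube-proxy and the CNI were still programming service rules; the earlier [WARNING] ... unsynced Kubernetes API line shows it started serving anyway and synced once the VIP became reachable. A hedged spot-check:

	kubectl get svc kubernetes                    # should show ClusterIP 10.96.0.1
	kubectl -n kube-system get pods -l k8s-app=kube-proxy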
	
	
	==> describe nodes <==
	Name:               no-preload-806996
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-806996
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-806996
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_14_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:14:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-806996
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:15:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:15:33 +0000   Sun, 19 Oct 2025 17:15:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-806996
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                18a9e783-21eb-4794-bbc4-d787e21fb79d
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-s4dxw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-806996                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-zndcx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-806996              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-806996     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-76f5v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-806996              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s96d5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8t886         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node no-preload-806996 event: Registered Node no-preload-806996 in Controller
	  Normal  NodeReady                92s                  kubelet          Node no-preload-806996 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-806996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-806996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-806996 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node no-preload-806996 event: Registered Node no-preload-806996 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
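The repeated martian source entries above are the kernel flagging packets that claim an impossible source address (127.0.0.1) for eth0, a common artifact of NAT'd pod traffic in nested container setups, and harmless noise in this run. Whether they are logged at all is controlled by a sysctl:

	sysctl net.ipv4.conf.all.log_martians         # 1 = log martians, 0 = silence them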
	
	
	==> etcd [57798f07866c641800ed16ace6a8acd5b23639cda988891b8373c1b5db7e8dca] <==
	{"level":"warn","ts":"2025-10-19T17:15:01.746530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.753061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.759509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.766222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.778270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.784972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.792274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.798719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.805122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.811095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.817519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.824043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.831447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.837495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.844426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.851408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.862171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.869313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.875709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:01.927354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:36.395538Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.779097ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T17:15:36.395646Z","caller":"traceutil/trace.go:172","msg":"trace[1325953621] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:662; }","duration":"287.90836ms","start":"2025-10-19T17:15:36.107720Z","end":"2025-10-19T17:15:36.395629Z","steps":["trace[1325953621] 'range keys from in-memory index tree'  (duration: 287.728697ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:15:36.396282Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.167898ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356070003552425 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:589 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T17:15:36.396447Z","caller":"traceutil/trace.go:172","msg":"trace[1111645797] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"396.631894ms","start":"2025-10-19T17:15:35.999793Z","end":"2025-10-19T17:15:36.396425Z","steps":["trace[1111645797] 'process raft request'  (duration: 129.743372ms)","trace[1111645797] 'compare'  (duration: 265.992638ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:15:36.396757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-19T17:15:35.999771Z","time spent":"396.815389ms","remote":"127.0.0.1:43262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:589 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4373 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 17:15:55 up 58 min,  0 user,  load average: 4.00, 3.00, 1.80
	Linux no-preload-806996 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d4d5fee23b457fb8794d8484d04e1e2bd58f052ff7b005cbc54b4452aacbedf] <==
	I1019 17:15:03.211298       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:03.211636       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:15:03.211868       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:03.211891       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:03.211914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:03.508830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:03.508949       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:03.508986       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:03.509411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:15:03.809351       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:15:03.809391       1 metrics.go:72] Registering metrics
	I1019 17:15:03.809510       1 controller.go:711] "Syncing nftables rules"
	I1019 17:15:13.432552       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:13.432611       1 main.go:301] handling current node
	I1019 17:15:23.432610       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:23.432645       1 main.go:301] handling current node
	I1019 17:15:33.431908       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:33.431946       1 main.go:301] handling current node
	I1019 17:15:43.433204       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:43.433239       1 main.go:301] handling current node
	I1019 17:15:53.441177       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:15:53.441207       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fce011c2a0450511fcc8dd7c1c20bab17cded7471868ef01ff9f8bd81c4e288b] <==
	I1019 17:15:02.404544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:15:02.403672       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:15:02.403333       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:15:02.405350       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:15:02.405397       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:15:02.405416       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:15:02.405422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:15:02.405428       1 cache.go:39] Caches are synced for autoregister controller
	E1019 17:15:02.410489       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:15:02.412269       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:15:02.417886       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:15:02.427812       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:15:02.427842       1 policy_source.go:240] refreshing policies
	I1019 17:15:02.459523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:15:02.556595       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:15:02.691954       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:15:02.724574       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:15:02.743142       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:15:02.750259       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:15:02.793748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.60.237"}
	I1019 17:15:02.803954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.119.238"}
	I1019 17:15:03.305194       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:15:05.793250       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:15:06.140307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:15:06.290363       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59114e638c3e345b69d996de509f78fbdb413e207c5f1f5aaa29fa9072561ec7] <==
	I1019 17:15:05.703786       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:15:05.706122       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:15:05.708454       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:15:05.718783       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:15:05.736258       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:15:05.736391       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:15:05.736416       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:15:05.736730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:05.736835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:15:05.736873       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:15:05.737035       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:15:05.737718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:15:05.737731       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:15:05.737724       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:05.737812       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:15:05.737831       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:15:05.737919       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:15:05.738095       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-806996"
	I1019 17:15:05.738159       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:15:05.740230       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:05.740323       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:15:05.741465       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:05.743207       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:05.745603       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:05.769107       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6a81c8dbabbdc0f923d7667d7492b6e13121a27eaf4b8c9c2155e06d06dda4c] <==
	I1019 17:15:02.986042       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:03.051892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:03.152047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:03.152114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:15:03.152180       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:03.171012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:03.171085       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:03.176376       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:03.176848       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:03.176893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:03.178657       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:03.178774       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:03.178921       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:03.179011       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:03.179479       1 config.go:309] "Starting node config controller"
	I1019 17:15:03.179564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:03.179590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:03.179013       1 config.go:200] "Starting service config controller"
	I1019 17:15:03.180594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:03.279093       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:15:03.280297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:15:03.281490       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bc11f1b63d4f685d90c3f222bd54906e082991e7bbcad2b179d7e8a591d49f53] <==
	I1019 17:15:00.600326       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:15:02.338770       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:15:02.338827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:15:02.338840       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:15:02.338854       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:15:02.382700       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:15:02.382738       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:02.385908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:02.386031       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:02.387127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:15:02.387293       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:15:02.486493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:15:05 no-preload-806996 kubelet[710]: I1019 17:15:05.943279     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476259     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8t886\" (UID: \"21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476305     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d910290-e686-4092-b130-ac3aae81b534-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s96d5\" (UID: \"6d910290-e686-4092-b130-ac3aae81b534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476321     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkmf\" (UniqueName: \"kubernetes.io/projected/21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7-kube-api-access-btkmf\") pod \"kubernetes-dashboard-855c9754f9-8t886\" (UID: \"21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886"
	Oct 19 17:15:06 no-preload-806996 kubelet[710]: I1019 17:15:06.476347     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ksz\" (UniqueName: \"kubernetes.io/projected/6d910290-e686-4092-b130-ac3aae81b534-kube-api-access-p6ksz\") pod \"dashboard-metrics-scraper-6ffb444bf9-s96d5\" (UID: \"6d910290-e686-4092-b130-ac3aae81b534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5"
	Oct 19 17:15:12 no-preload-806996 kubelet[710]: I1019 17:15:12.579690     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8t886" podStartSLOduration=3.15677728 podStartE2EDuration="6.579664106s" podCreationTimestamp="2025-10-19 17:15:06 +0000 UTC" firstStartedPulling="2025-10-19 17:15:06.693290995 +0000 UTC m=+7.210931605" lastFinishedPulling="2025-10-19 17:15:10.116177818 +0000 UTC m=+10.633818431" observedRunningTime="2025-10-19 17:15:10.652357832 +0000 UTC m=+11.169998452" watchObservedRunningTime="2025-10-19 17:15:12.579664106 +0000 UTC m=+13.097304719"
	Oct 19 17:15:13 no-preload-806996 kubelet[710]: I1019 17:15:13.641911     710 scope.go:117] "RemoveContainer" containerID="794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: I1019 17:15:14.646862     710 scope.go:117] "RemoveContainer" containerID="794d02920d03582a1df0e11b4922e49d7e5e8f468aa2500ad25c48a30102b14b"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: I1019 17:15:14.647087     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:14 no-preload-806996 kubelet[710]: E1019 17:15:14.647267     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:15 no-preload-806996 kubelet[710]: I1019 17:15:15.651858     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:15 no-preload-806996 kubelet[710]: E1019 17:15:15.652144     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:20 no-preload-806996 kubelet[710]: I1019 17:15:20.120529     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:20 no-preload-806996 kubelet[710]: E1019 17:15:20.120814     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.576521     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.707410     710 scope.go:117] "RemoveContainer" containerID="47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.709367     710 scope.go:117] "RemoveContainer" containerID="61cf171e18d3bf6d01a701feda86b01ba9fff473f71c9aef818cad9a7381ce82"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: I1019 17:15:33.709679     710 scope.go:117] "RemoveContainer" containerID="e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	Oct 19 17:15:33 no-preload-806996 kubelet[710]: E1019 17:15:33.709843     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:40 no-preload-806996 kubelet[710]: I1019 17:15:40.120348     710 scope.go:117] "RemoveContainer" containerID="e06905acc29723550f6e08a642ca693a6217ddf074925f5b791b25019e9c975c"
	Oct 19 17:15:40 no-preload-806996 kubelet[710]: E1019 17:15:40.120555     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s96d5_kubernetes-dashboard(6d910290-e686-4092-b130-ac3aae81b534)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s96d5" podUID="6d910290-e686-4092-b130-ac3aae81b534"
	Oct 19 17:15:50 no-preload-806996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:15:50 no-preload-806996 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:15:50 no-preload-806996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:15:50 no-preload-806996 systemd[1]: kubelet.service: Consumed 1.701s CPU time.
	
	
	==> kubernetes-dashboard [0a2f39ea915fe96227067df463e324062c88165d1d33630023a26f599191e95e] <==
	2025/10/19 17:15:10 Starting overwatch
	2025/10/19 17:15:10 Using namespace: kubernetes-dashboard
	2025/10/19 17:15:10 Using in-cluster config to connect to apiserver
	2025/10/19 17:15:10 Using secret token for csrf signing
	2025/10/19 17:15:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:15:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:15:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:15:10 Generating JWE encryption key
	2025/10/19 17:15:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:15:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:15:10 Initializing JWE encryption key from synchronized object
	2025/10/19 17:15:10 Creating in-cluster Sidecar client
	2025/10/19 17:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:15:10 Serving insecurely on HTTP port: 9090
	2025/10/19 17:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [382f76d5c0a2c6c8532920ea02a4812107d5461dd13a4f6d3d05edeadc2d5db6] <==
	I1019 17:15:33.762939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:15:33.770989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:15:33.771048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:15:33.773274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:37.228985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:41.489843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:45.087965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:48.142250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.164285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.168454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:51.168625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:15:51.168754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ade808a3-50b3-4da9-9740-0f1294aa75ce", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a became leader
	I1019 17:15:51.168804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a!
	W1019 17:15:51.170559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:51.174574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:51.269126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-806996_06456bdc-e381-45d2-84d4-72bdf680e14a!
	W1019 17:15:53.185998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:53.193861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:55.197424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:55.202184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [47a6f8337391e92a1eb11ec931d4dcb05e8cc253a9eff0440440e61a960f5336] <==
	I1019 17:15:02.956331       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:15:32.960600       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
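The second storage-provisioner container in the dump above dies at startup: its first act is an apiserver version probe against the in-cluster service address (https://10.96.0.1:443/version?timeout=32s), and when that times out it exits fatally, which is why a replacement container (382f76d5c0a2...) appears later. For reference, that probe is an ordinary client-go discovery call; the following is a minimal sketch of the same check, assuming in-cluster config and the standard client-go packages (not the provisioner's actual source):

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config derives the apiserver address (10.96.0.1:443
		// here) from the service-account env vars and mounted token.
		config, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building config: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// GET /version; with the apiserver unreachable this is where a
		// "error getting server version ... i/o timeout" fatal comes from.
		v, err := clientset.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("apiserver: %s", v.GitVersion)
	}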
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806996 -n no-preload-806996
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806996 -n no-preload-806996: exit status 2 (329.51415ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-806996 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (568.577351ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
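The root cause of the exit-11 above is the paused-state probe: before enabling an addon, minikube lists containers on the node with `sudo runc list -f json`, and that command exits 1 because /run/runc does not exist. A minimal sketch of such a probe in Go follows (hypothetical helper, not minikube's implementation; it assumes only os/exec and encoding/json, and that runc's JSON output carries "id" and "status" fields, which `runc list -f json` does):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState mirrors the fields of interest in `runc list -f json` output.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused is a hypothetical stand-in for the failing probe: it runs
	// the same command and returns the IDs of paused containers.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The path taken in this report: runc exits 1 because its
			// state directory /run/runc is missing on the node.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}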
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-090139 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-090139 describe deploy/metrics-server -n kube-system: exit status 1 (137.162351ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-090139 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
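For clarity, the failed assertion at start_stop_delete_test.go:219 boils down to: fetch the metrics-server deployment and require that one of its containers references the image rewritten through the --images/--registries overrides. A hedged client-go sketch of that check (hypothetical helper name; in this run it would already fail with NotFound, since the addon never deployed):

	package addoncheck

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// addonImageRewritten is a hypothetical restatement of the assertion:
	// the deployment's pod template must carry the overridden registry prefix.
	func addonImageRewritten(ctx context.Context, c kubernetes.Interface) error {
		deploy, err := c.AppsV1().Deployments("kube-system").
			Get(ctx, "metrics-server", metav1.GetOptions{})
		if err != nil {
			return err // in this run: NotFound, the addon never deployed
		}
		for _, ctr := range deploy.Spec.Template.Spec.Containers {
			if strings.Contains(ctr.Image, "fake.domain/registry.k8s.io/echoserver:1.4") {
				return nil
			}
		}
		return fmt.Errorf("no container in %s references the overridden image", deploy.Name)
	}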
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-090139
helpers_test.go:243: (dbg) docker inspect embed-certs-090139:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	        "Created": "2025-10-19T17:15:20.164222926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:15:20.200331713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3-json.log",
	        "Name": "/embed-certs-090139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-090139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-090139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	                "LowerDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-090139",
	                "Source": "/var/lib/docker/volumes/embed-certs-090139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-090139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-090139",
	                "name.minikube.sigs.k8s.io": "embed-certs-090139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a3b15ab7e6bcd4d3ebff2fc9d167d978e18dbf8e7b19f5ffa6d25fc2578b212",
	            "SandboxKey": "/var/run/docker/netns/1a3b15ab7e6b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-090139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:86:89:38:ea:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f3b41047906a4786b547f272192944794206cd82d35412a1c4498289619b68a",
	                    "EndpointID": "409c5af6c6a44168ee8972a651ee2e57139d1a402c2b16d8d768028467479e74",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-090139",
	                        "491b138dfd3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
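The Ports map in the inspect output above is what the test harness queries to reach the node; each container port is bound to 127.0.0.1 on an ephemeral host port (22/tcp -> 33074 for SSH). An equivalent one-off lookup, sketched with os/exec and the same Go template that appears in the cli_runner lines further below (hostPort is a hypothetical helper; docker is assumed on PATH):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// hostPort extracts the host port bound to a container port, using the
	// same inspect template the harness runs.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Per the inspect output above, this prints 33074.
		p, err := hostPort("embed-certs-090139", "22/tcp")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p)
	}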
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25: (1.099276406s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-447724                                                                                                                                                                                                                     │ missing-upgrade-447724       │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:15:59.322544  262636 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:59.322692  262636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:59.322702  262636 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:59.322709  262636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:59.322927  262636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:59.323472  262636 out.go:368] Setting JSON to false
	I1019 17:15:59.325015  262636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3505,"bootTime":1760890654,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:59.325089  262636 start.go:143] virtualization: kvm guest
	I1019 17:15:59.327123  262636 out.go:179] * [newest-cni-848035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:59.329230  262636 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:59.329273  262636 notify.go:221] Checking for updates...
	I1019 17:15:59.332618  262636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:59.334099  262636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:59.335407  262636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:59.336570  262636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:59.337968  262636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:59.339533  262636 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339667  262636 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339792  262636 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339901  262636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:59.368133  262636 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:59.368214  262636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:59.440031  262636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:59.42241722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:59.440197  262636 docker.go:319] overlay module found
	I1019 17:15:59.444982  262636 out.go:179] * Using the docker driver based on user configuration
	I1019 17:15:58.935258  256207 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-663015"
	I1019 17:15:58.935301  256207 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:15:58.935764  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:58.936375  256207 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:58.936393  256207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:15:58.936455  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:58.966134  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:58.975246  256207 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:58.975271  256207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:15:58.975334  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:59.006504  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:59.027604  256207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:15:59.104432  256207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:59.104890  256207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:59.125546  256207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:59.240964  256207 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 17:15:59.481139  256207 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-663015" to be "Ready" ...
	I1019 17:15:59.482157  256207 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:15:59.446341  262636 start.go:309] selected driver: docker
	I1019 17:15:59.446424  262636 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:59.446443  262636 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:59.447427  262636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:59.513489  262636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:59.50380274 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:59.513666  262636 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 17:15:59.513700  262636 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 17:15:59.513991  262636 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:15:59.515437  262636 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:59.516540  262636 cni.go:84] Creating CNI manager for ""
	I1019 17:15:59.516619  262636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:59.516633  262636 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:59.516756  262636 start.go:353] cluster config:
	{Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:59.518966  262636 out.go:179] * Starting "newest-cni-848035" primary control-plane node in "newest-cni-848035" cluster
	I1019 17:15:59.520393  262636 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:59.521668  262636 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:59.522998  262636 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:59.523036  262636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:59.523043  262636 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:59.523081  262636 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:59.523178  262636 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:59.523194  262636 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
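The preload.go/cache.go lines above show the fast path: a tarball of pre-pulled images for v1.34.1 on cri-o already sits in the local cache, so no download happens. An illustrative way to confirm the same cache hit by hand (path taken verbatim from the log):

    $ ls -lh /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/

which should list the preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 file named above.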
	I1019 17:15:59.523301  262636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/config.json ...
	I1019 17:15:59.523328  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/config.json: {Name:mk46f943bf6dbbc8e42c314c2013533e30a03ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:59.544936  262636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:59.544955  262636 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:59.544972  262636 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:59.545002  262636 start.go:360] acquireMachinesLock for newest-cni-848035: {Name:mk73020b94db81f5952879aa2f581596a932c88c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:59.545152  262636 start.go:364] duration metric: took 117.087µs to acquireMachinesLock for "newest-cni-848035"
	I1019 17:15:59.545188  262636 start.go:93] Provisioning new machine with config: &{Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:59.545250  262636 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:15:59.483617  256207 addons.go:515] duration metric: took 576.969555ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:15:59.748741  256207 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-663015" context rescaled to 1 replicas
	W1019 17:16:01.484640  256207 node_ready.go:57] node "default-k8s-diff-port-663015" has "Ready":"False" status (will retry)
	I1019 17:15:58.515343  219832 cri.go:89] found id: ""
	I1019 17:15:58.515372  219832 logs.go:282] 0 containers: []
	W1019 17:15:58.515381  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:58.515388  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:58.515462  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:58.546610  219832 cri.go:89] found id: ""
	I1019 17:15:58.546632  219832 logs.go:282] 0 containers: []
	W1019 17:15:58.546640  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:58.546654  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:58.546676  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:58.561879  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:15:58.561909  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:15:58.599537  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:58.599572  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:58.635426  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:58.635462  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:58.667022  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:58.667052  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:58.796453  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:58.796484  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:58.866320  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:58.866352  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:58.866369  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:58.944419  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:15:58.944459  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:15:58.997705  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:58.997777  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:59.084149  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:59.084191  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:01.635141  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:01.635694  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
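The "stopped:" line means the health poller could not even open a TCP connection to the apiserver, so the loop below falls back to collecting logs again. The same probe can be reproduced by hand (illustrative; -k because the apiserver serves a cluster-CA certificate the host does not trust):

    $ curl -k https://192.168.94.2:8443/healthz

which, while the apiserver is down, should fail with the same "connection refused".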
	I1019 17:16:01.635751  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:01.635818  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:01.666235  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:01.666260  219832 cri.go:89] found id: ""
	I1019 17:16:01.666269  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:01.666333  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.670421  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:01.670497  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:01.701922  219832 cri.go:89] found id: ""
	I1019 17:16:01.701954  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.701972  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:01.701979  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:01.702037  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:01.730807  219832 cri.go:89] found id: ""
	I1019 17:16:01.730832  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.730842  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:01.730849  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:01.730913  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:01.759751  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:01.759777  219832 cri.go:89] found id: ""
	I1019 17:16:01.759785  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:01.759843  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.763917  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:01.763990  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:01.791456  219832 cri.go:89] found id: ""
	I1019 17:16:01.791487  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.791498  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:01.791507  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:01.791566  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:01.823314  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:01.823336  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:01.823340  219832 cri.go:89] found id: ""
	I1019 17:16:01.823347  219832 logs.go:282] 2 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:16:01.823393  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.827617  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.831523  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:01.831589  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:01.860369  219832 cri.go:89] found id: ""
	I1019 17:16:01.860392  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.860400  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:01.860405  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:01.860449  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:01.889639  219832 cri.go:89] found id: ""
	I1019 17:16:01.889666  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.889677  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:01.889693  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:01.889706  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:01.906946  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:01.906978  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:01.966849  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:01.966874  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:01.966891  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:02.002930  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:16:02.002968  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:02.030485  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:02.030518  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:02.082074  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:02.082110  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:02.183474  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:02.183506  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:02.245100  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:02.245143  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:02.274811  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:02.274842  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:15:59.548160  262636 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:59.548357  262636 start.go:159] libmachine.API.Create for "newest-cni-848035" (driver="docker")
	I1019 17:15:59.548390  262636 client.go:171] LocalClient.Create starting
	I1019 17:15:59.548488  262636 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:59.548521  262636 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:59.548535  262636 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:59.548585  262636 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:59.548607  262636 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:59.548617  262636 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:59.548992  262636 cli_runner.go:164] Run: docker network inspect newest-cni-848035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:59.567144  262636 cli_runner.go:211] docker network inspect newest-cni-848035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:59.567225  262636 network_create.go:284] running [docker network inspect newest-cni-848035] to gather additional debugging logs...
	I1019 17:15:59.567246  262636 cli_runner.go:164] Run: docker network inspect newest-cni-848035
	W1019 17:15:59.587004  262636 cli_runner.go:211] docker network inspect newest-cni-848035 returned with exit code 1
	I1019 17:15:59.587042  262636 network_create.go:287] error running [docker network inspect newest-cni-848035]: docker network inspect newest-cni-848035: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-848035 not found
	I1019 17:15:59.587061  262636 network_create.go:289] output of [docker network inspect newest-cni-848035]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-848035 not found
	
	** /stderr **
	I1019 17:15:59.587186  262636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:59.606238  262636 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:59.606955  262636 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:59.607693  262636 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:59.608518  262636 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ecea20}
	I1019 17:15:59.608544  262636 network_create.go:124] attempt to create docker network newest-cni-848035 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 17:15:59.608611  262636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-848035 newest-cni-848035
	I1019 17:15:59.672766  262636 network_create.go:108] docker network newest-cni-848035 192.168.76.0/24 created
	I1019 17:15:59.672796  262636 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-848035" container
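Note the probing pattern above: minikube walks candidate /24 subnets at a stride of 9 (192.168.49.0, .58, .67, .76, ...), consistent with the other clusters in this run sitting at 192.168.85.x, 192.168.94.x and 192.168.103.x, and then reserves .2 of the chosen subnet for the node. An illustrative check that the created network got the expected range:

    $ docker network inspect newest-cni-848035 --format '{{(index .IPAM.Config 0).Subnet}}'
    192.168.76.0/24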
	I1019 17:15:59.672849  262636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:59.691058  262636 cli_runner.go:164] Run: docker volume create newest-cni-848035 --label name.minikube.sigs.k8s.io=newest-cni-848035 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:59.709827  262636 oci.go:103] Successfully created a docker volume newest-cni-848035
	I1019 17:15:59.709911  262636 cli_runner.go:164] Run: docker run --rm --name newest-cni-848035-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-848035 --entrypoint /usr/bin/test -v newest-cni-848035:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:16:00.096407  262636 oci.go:107] Successfully prepared a docker volume newest-cni-848035
	I1019 17:16:00.096464  262636 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:00.096485  262636 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:16:00.096558  262636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-848035:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
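The extraction step above warms the node's container store before the node container even exists: a throwaway kicbase container bind-mounts the preload tarball read-only and untars it into the newest-cni-848035 volume at /extractDir, so CRI-O inside the node starts with all images already present. A hedged way to peek at the result (busybox is an arbitrary small image chosen only for illustration):

    $ docker run --rm -v newest-cni-848035:/extractDir busybox ls /extractDir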
	
	
	==> CRI-O <==
	Oct 19 17:15:52 embed-certs-090139 crio[776]: time="2025-10-19T17:15:52.856113605Z" level=info msg="Starting container: afcd527184f171a97318d29a437aa7ea0c2cba8878393ef106d67c4f82731814" id=e946a0bc-27ee-4e3b-abb3-2b55da86d20d name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:52 embed-certs-090139 crio[776]: time="2025-10-19T17:15:52.858264166Z" level=info msg="Started container" PID=1838 containerID=afcd527184f171a97318d29a437aa7ea0c2cba8878393ef106d67c4f82731814 description=kube-system/coredns-66bc5c9577-zw7d8/coredns id=e946a0bc-27ee-4e3b-abb3-2b55da86d20d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b370d3daa0f6c0f7a6ed90cfb7316dd612de7a370f21923de3a1a63ff3c3454
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.881952321Z" level=info msg="Running pod sandbox: default/busybox/POD" id=20ae4944-95ba-4a67-b2fd-ff28c117f1e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.882078308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.887287657Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec964f457b0e0f7b80a2f52d44b223d4b9dbf58d083993573ad842b10ffafbee UID:3863b530-fafc-49ad-aaf5-39e7efa20789 NetNS:/var/run/netns/6170c4ac-967f-4667-a69b-529e62f299da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132cb0}] Aliases:map[]}"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.887325792Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.898987906Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec964f457b0e0f7b80a2f52d44b223d4b9dbf58d083993573ad842b10ffafbee UID:3863b530-fafc-49ad-aaf5-39e7efa20789 NetNS:/var/run/netns/6170c4ac-967f-4667-a69b-529e62f299da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132cb0}] Aliases:map[]}"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.899132075Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.899936851Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.901195823Z" level=info msg="Ran pod sandbox ec964f457b0e0f7b80a2f52d44b223d4b9dbf58d083993573ad842b10ffafbee with infra container: default/busybox/POD" id=20ae4944-95ba-4a67-b2fd-ff28c117f1e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.902524545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2b6b386-8816-41a3-a4b6-88e3c020fa10 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.902703079Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a2b6b386-8816-41a3-a4b6-88e3c020fa10 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.902739826Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a2b6b386-8816-41a3-a4b6-88e3c020fa10 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.90354484Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f7021efa-235b-4916-b92b-410b0eef8dc7 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:15:55 embed-certs-090139 crio[776]: time="2025-10-19T17:15:55.907720116Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.662154472Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f7021efa-235b-4916-b92b-410b0eef8dc7 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.662954636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cea3f092-8c62-4917-8870-598727965872 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.664428934Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e342d5fc-0164-4898-a6f9-26f556023f00 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.667768855Z" level=info msg="Creating container: default/busybox/busybox" id=c107d34f-dfcb-4868-8e88-1c21ebbeedcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.668556672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.672208189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.672674833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.694961585Z" level=info msg="Created container 23f6bec67f7c36c06e5b2866ada3a0821dd4ec59293f9412a4e317e38c6d6965: default/busybox/busybox" id=c107d34f-dfcb-4868-8e88-1c21ebbeedcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.695669473Z" level=info msg="Starting container: 23f6bec67f7c36c06e5b2866ada3a0821dd4ec59293f9412a4e317e38c6d6965" id=da91b76f-7eeb-4d1a-b720-b63ef056dcb7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:15:56 embed-certs-090139 crio[776]: time="2025-10-19T17:15:56.697566893Z" level=info msg="Started container" PID=1911 containerID=23f6bec67f7c36c06e5b2866ada3a0821dd4ec59293f9412a4e317e38c6d6965 description=default/busybox/busybox id=da91b76f-7eeb-4d1a-b720-b63ef056dcb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec964f457b0e0f7b80a2f52d44b223d4b9dbf58d083993573ad842b10ffafbee
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	23f6bec67f7c3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ec964f457b0e0       busybox                                      default
	afcd527184f17       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   1b370d3daa0f6       coredns-66bc5c9577-zw7d8                     kube-system
	e312667c89859       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   ad6aeb071ff9c       storage-provisioner                          kube-system
	e33e69e7c7c45       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   6d6280b8afbd4       kube-proxy-8f4lh                             kube-system
	056ca768bb786       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   49afd6ecad9dc       kindnet-dwsh7                                kube-system
	a0c8c089e80ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   696db1da7b3c4       kube-controller-manager-embed-certs-090139   kube-system
	e9a4234b47255       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   d91bdf0e0b46d       kube-scheduler-embed-certs-090139            kube-system
	95d5cca03d25a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   df078be33e315       etcd-embed-certs-090139                      kube-system
	dd90a121a35ee       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   c45b2191f639f       kube-apiserver-embed-certs-090139            kube-system
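	This table is the "container status" gatherer seen earlier in the log (sudo crictl ps -a, with docker ps -a as fallback); every control-plane component on embed-certs-090139 is Running with attempt 0, so this particular dump shows a healthy node. The same data can be pulled interactively, and any row expanded by ID prefix (commands illustrative):

    $ sudo crictl ps -a
    $ sudo crictl inspect 23f6bec67f7c3    # full JSON for the busybox container, ID prefix is enough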
	
	
	==> coredns [afcd527184f171a97318d29a437aa7ea0c2cba8878393ef106d67c4f82731814] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33010 - 27147 "HINFO IN 6368245549242559546.4849567940058971506. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.499175622s
	
	
	==> describe nodes <==
	Name:               embed-certs-090139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-090139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-090139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-090139
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:15:56 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:15:56 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:15:56 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:15:56 +0000   Sun, 19 Oct 2025 17:15:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-090139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                308b3de9-570c-4288-a8e0-c3790dfe5ce4
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-zw7d8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-090139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-dwsh7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-090139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-090139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-8f4lh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-090139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-090139 event: Registered Node embed-certs-090139 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-090139 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
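	The repeated "martian source" entries record packets arriving on eth0 with a source address (127.0.0.1) that can never legitimately appear there; the growing gaps between repeats (+1s, +2s, +4s, +8s, +16s) are the kernel rate-limiting its own message, not the packets slowing down. Whether this logging is enabled on a host can be checked with (illustrative):

    $ sysctl net.ipv4.conf.all.log_martians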
	
	
	==> etcd [95d5cca03d25a299f74fb958d2505c647422c7e97b31906a9b33627e1234d2d2] <==
	{"level":"info","ts":"2025-10-19T17:15:36.396268Z","caller":"traceutil/trace.go:172","msg":"trace[382502383] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:293; }","duration":"306.430261ms","start":"2025-10-19T17:15:36.089831Z","end":"2025-10-19T17:15:36.396262Z","steps":["trace[382502383] 'agreement among raft nodes before linearized reading'  (duration: 306.368437ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:15:36.396288Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-19T17:15:36.089810Z","time spent":"306.473365ms","remote":"127.0.0.1:43758","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-19T17:15:36.524460Z","caller":"traceutil/trace.go:172","msg":"trace[1448768972] linearizableReadLoop","detail":"{readStateIndex:300; appliedIndex:300; }","duration":"111.897136ms","start":"2025-10-19T17:15:36.412539Z","end":"2025-10-19T17:15:36.524436Z","steps":["trace[1448768972] 'read index received'  (duration: 111.886796ms)","trace[1448768972] 'applied index is now lower than readState.Index'  (duration: 8.614µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:15:36.599187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.628995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T17:15:36.599261Z","caller":"traceutil/trace.go:172","msg":"trace[912747085] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:0; response_revision:293; }","duration":"186.716355ms","start":"2025-10-19T17:15:36.412527Z","end":"2025-10-19T17:15:36.599243Z","steps":["trace[912747085] 'agreement among raft nodes before linearized reading'  (duration: 112.01333ms)","trace[912747085] 'range keys from in-memory index tree'  (duration: 74.576912ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:15:36.599367Z","caller":"traceutil/trace.go:172","msg":"trace[203357468] transaction","detail":"{read_only:false; response_revision:294; number_of_response:1; }","duration":"199.487735ms","start":"2025-10-19T17:15:36.399857Z","end":"2025-10-19T17:15:36.599345Z","steps":["trace[203357468] 'process raft request'  (duration: 124.628405ms)","trace[203357468] 'compare'  (duration: 74.667233ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:15:36.599386Z","caller":"traceutil/trace.go:172","msg":"trace[1249657382] transaction","detail":"{read_only:false; number_of_response:0; response_revision:294; }","duration":"154.578227ms","start":"2025-10-19T17:15:36.444794Z","end":"2025-10-19T17:15:36.599372Z","steps":["trace[1249657382] 'process raft request'  (duration: 154.488739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:15:36.599454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.732135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-090139\" limit:1 ","response":"range_response_count:1 size:5862"}
	{"level":"info","ts":"2025-10-19T17:15:36.599490Z","caller":"traceutil/trace.go:172","msg":"trace[504683407] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-090139; range_end:; response_count:1; response_revision:294; }","duration":"155.77615ms","start":"2025-10-19T17:15:36.443704Z","end":"2025-10-19T17:15:36.599480Z","steps":["trace[504683407] 'agreement among raft nodes before linearized reading'  (duration: 155.641384ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:36.599565Z","caller":"traceutil/trace.go:172","msg":"trace[884928904] transaction","detail":"{read_only:false; number_of_response:0; response_revision:294; }","duration":"154.504694ms","start":"2025-10-19T17:15:36.445053Z","end":"2025-10-19T17:15:36.599558Z","steps":["trace[884928904] 'process raft request'  (duration: 154.331486ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:36.599571Z","caller":"traceutil/trace.go:172","msg":"trace[648623779] transaction","detail":"{read_only:false; number_of_response:0; response_revision:294; }","duration":"154.720685ms","start":"2025-10-19T17:15:36.444841Z","end":"2025-10-19T17:15:36.599562Z","steps":["trace[648623779] 'process raft request'  (duration: 154.47491ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:36.599625Z","caller":"traceutil/trace.go:172","msg":"trace[2116301271] transaction","detail":"{read_only:false; number_of_response:0; response_revision:294; }","duration":"154.767919ms","start":"2025-10-19T17:15:36.444850Z","end":"2025-10-19T17:15:36.599618Z","steps":["trace[2116301271] 'process raft request'  (duration: 154.513386ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:36.730784Z","caller":"traceutil/trace.go:172","msg":"trace[713520646] linearizableReadLoop","detail":"{readStateIndex:305; appliedIndex:305; }","duration":"126.614382ms","start":"2025-10-19T17:15:36.604147Z","end":"2025-10-19T17:15:36.730761Z","steps":["trace[713520646] 'read index received'  (duration: 126.602082ms)","trace[713520646] 'applied index is now lower than readState.Index'  (duration: 10.931µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:15:36.776877Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.704052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-090139\" limit:1 ","response":"range_response_count:1 size:3389"}
	{"level":"info","ts":"2025-10-19T17:15:36.776944Z","caller":"traceutil/trace.go:172","msg":"trace[1954212130] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-090139; range_end:; response_count:1; response_revision:294; }","duration":"172.785407ms","start":"2025-10-19T17:15:36.604141Z","end":"2025-10-19T17:15:36.776926Z","steps":["trace[1954212130] 'agreement among raft nodes before linearized reading'  (duration: 126.691173ms)","trace[1954212130] 'range keys from in-memory index tree'  (duration: 45.918733ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:15:36.776973Z","caller":"traceutil/trace.go:172","msg":"trace[926033049] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"172.895337ms","start":"2025-10-19T17:15:36.604054Z","end":"2025-10-19T17:15:36.776949Z","steps":["trace[926033049] 'process raft request'  (duration: 126.770292ms)","trace[926033049] 'compare'  (duration: 45.948869ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:15:36.777354Z","caller":"traceutil/trace.go:172","msg":"trace[1109522349] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"171.351067ms","start":"2025-10-19T17:15:36.605995Z","end":"2025-10-19T17:15:36.777346Z","steps":["trace[1109522349] 'process raft request'  (duration: 171.242039ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:36.777531Z","caller":"traceutil/trace.go:172","msg":"trace[2106956038] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"160.347388ms","start":"2025-10-19T17:15:36.617128Z","end":"2025-10-19T17:15:36.777476Z","steps":["trace[2106956038] 'process raft request'  (duration: 160.198263ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:15:37.163990Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.398491ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789441012392377 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-090139\" mod_revision:277 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-090139\" value_size:4729 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-090139\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T17:15:37.164174Z","caller":"traceutil/trace.go:172","msg":"trace[1967196050] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"297.843601ms","start":"2025-10-19T17:15:36.866319Z","end":"2025-10-19T17:15:37.164163Z","steps":["trace[1967196050] 'process raft request'  (duration: 297.773751ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:15:37.164252Z","caller":"traceutil/trace.go:172","msg":"trace[1083628510] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"298.385652ms","start":"2025-10-19T17:15:36.865719Z","end":"2025-10-19T17:15:37.164105Z","steps":["trace[1083628510] 'process raft request'  (duration: 137.613729ms)","trace[1083628510] 'compare'  (duration: 160.305495ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:15:37.525109Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.276804ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789441012392387 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/ttl-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/ttl-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T17:15:37.525199Z","caller":"traceutil/trace.go:172","msg":"trace[1722691373] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"264.33491ms","start":"2025-10-19T17:15:37.260839Z","end":"2025-10-19T17:15:37.525174Z","steps":["trace[1722691373] 'process raft request'  (duration: 115.921851ms)","trace[1722691373] 'compare'  (duration: 148.124677ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:15:37.541998Z","caller":"traceutil/trace.go:172","msg":"trace[592945277] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"277.251709ms","start":"2025-10-19T17:15:37.264728Z","end":"2025-10-19T17:15:37.541979Z","steps":["trace[592945277] 'process raft request'  (duration: 277.137276ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:04.368904Z","caller":"traceutil/trace.go:172","msg":"trace[1785494380] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"119.126976ms","start":"2025-10-19T17:16:04.249758Z","end":"2025-10-19T17:16:04.368885Z","steps":["trace[1785494380] 'process raft request'  (duration: 119.009715ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:16:05 up 58 min,  0 user,  load average: 3.78, 3.00, 1.82
	Linux embed-certs-090139 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [056ca768bb786ff1e5144ddcb2170664ece1b2285781b89de454cf845272a181] <==
	I1019 17:15:41.798983       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:41.799321       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 17:15:41.799490       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:41.799514       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:41.799538       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:42.004061       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:42.035754       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:42.035773       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:42.035971       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:15:42.336162       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:15:42.336199       1 metrics.go:72] Registering metrics
	I1019 17:15:42.336310       1 controller.go:711] "Syncing nftables rules"
	I1019 17:15:52.006196       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:15:52.006263       1 main.go:301] handling current node
	I1019 17:16:02.006149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:16:02.006184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dd90a121a35ee0d743cc5d3d68d8c36d84f75c97cf736554ec16f6e08742f4f1] <==
	I1019 17:15:33.193727       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:15:33.193735       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:15:33.193741       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:15:33.196712       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:15:33.197215       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:15:33.217810       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:33.386227       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:15:34.085041       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:15:34.089826       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:15:34.089844       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:15:34.633261       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:15:34.674630       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:15:34.787497       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:15:34.794536       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1019 17:15:34.795623       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:15:34.802060       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:15:35.125233       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:15:35.794822       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:15:35.987221       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:15:36.001048       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:15:40.177468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:15:41.029827       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:41.034035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:41.127503       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1019 17:16:03.665711       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:41324: use of closed network connection
	
	
	==> kube-controller-manager [a0c8c089e80ef4afd001af81e17c22c64f8a17f0bd3013d78b18baf30a66b57f] <==
	I1019 17:15:40.095888       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:15:40.097999       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:15:40.112298       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:15:40.123462       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:15:40.124656       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:15:40.124684       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:15:40.124933       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:15:40.124974       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:40.127922       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:15:40.128028       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:15:40.129129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:40.129207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:40.129216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:40.129322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:15:40.129362       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:15:40.129375       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:15:40.129383       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:15:40.130814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:40.138143       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:15:40.138167       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:40.138303       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:15:40.138440       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-090139"
	I1019 17:15:40.138488       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:15:40.145904       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:15:55.140490       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e33e69e7c7c459510fe0e4fc387e07b826f77ccd881ad4e147443bafc5f3a04b] <==
	I1019 17:15:41.589550       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:41.657521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:41.758499       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:41.758541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1019 17:15:41.758624       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:41.777609       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:41.777669       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:41.783202       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:41.783586       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:41.783616       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:41.785765       1 config.go:200] "Starting service config controller"
	I1019 17:15:41.785797       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:41.785818       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:41.785824       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:41.785844       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:41.785851       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:41.787152       1 config.go:309] "Starting node config controller"
	I1019 17:15:41.787174       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:41.787183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:41.886860       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:15:41.886894       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:15:41.886893       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e9a4234b472556514a1b6b08297f50a68b2dbcb10e2d6d6895e14286653e3d37] <==
	E1019 17:15:33.142714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:15:33.142731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:15:33.142764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:15:33.142825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:15:33.142831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:15:33.142895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:15:33.142893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:15:33.143095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:15:33.143117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:15:33.143117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:15:33.988951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:15:34.031583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:15:34.094247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:15:34.111727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:15:34.170706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 17:15:34.171800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:15:34.177101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:15:34.177110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:15:34.184535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:15:34.220779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:15:34.220887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:15:34.359397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:15:34.378510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:15:34.412774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1019 17:15:36.639552       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:15:36 embed-certs-090139 kubelet[1332]: I1019 17:15:36.858524    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-090139" podStartSLOduration=1.858513629 podStartE2EDuration="1.858513629s" podCreationTimestamp="2025-10-19 17:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:36.858441565 +0000 UTC m=+1.534086168" watchObservedRunningTime="2025-10-19 17:15:36.858513629 +0000 UTC m=+1.534158238"
	Oct 19 17:15:37 embed-certs-090139 kubelet[1332]: I1019 17:15:37.166175    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-090139" podStartSLOduration=2.166149904 podStartE2EDuration="2.166149904s" podCreationTimestamp="2025-10-19 17:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:37.165936481 +0000 UTC m=+1.841581086" watchObservedRunningTime="2025-10-19 17:15:37.166149904 +0000 UTC m=+1.841794505"
	Oct 19 17:15:37 embed-certs-090139 kubelet[1332]: I1019 17:15:37.543424    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-090139" podStartSLOduration=3.543402575 podStartE2EDuration="3.543402575s" podCreationTimestamp="2025-10-19 17:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:37.257709447 +0000 UTC m=+1.933354054" watchObservedRunningTime="2025-10-19 17:15:37.543402575 +0000 UTC m=+2.219047180"
	Oct 19 17:15:40 embed-certs-090139 kubelet[1332]: I1019 17:15:40.091337    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:15:40 embed-certs-090139 kubelet[1332]: I1019 17:15:40.092184    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244668    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5baffb03-44e9-4304-a146-40598b517031-lib-modules\") pod \"kube-proxy-8f4lh\" (UID: \"5baffb03-44e9-4304-a146-40598b517031\") " pod="kube-system/kube-proxy-8f4lh"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244715    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e081eba9-4c2c-401b-84d2-1bfdd53460e9-lib-modules\") pod \"kindnet-dwsh7\" (UID: \"e081eba9-4c2c-401b-84d2-1bfdd53460e9\") " pod="kube-system/kindnet-dwsh7"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244736    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5baffb03-44e9-4304-a146-40598b517031-kube-proxy\") pod \"kube-proxy-8f4lh\" (UID: \"5baffb03-44e9-4304-a146-40598b517031\") " pod="kube-system/kube-proxy-8f4lh"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244806    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfg8\" (UniqueName: \"kubernetes.io/projected/5baffb03-44e9-4304-a146-40598b517031-kube-api-access-wrfg8\") pod \"kube-proxy-8f4lh\" (UID: \"5baffb03-44e9-4304-a146-40598b517031\") " pod="kube-system/kube-proxy-8f4lh"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244883    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e081eba9-4c2c-401b-84d2-1bfdd53460e9-cni-cfg\") pod \"kindnet-dwsh7\" (UID: \"e081eba9-4c2c-401b-84d2-1bfdd53460e9\") " pod="kube-system/kindnet-dwsh7"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244937    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e081eba9-4c2c-401b-84d2-1bfdd53460e9-xtables-lock\") pod \"kindnet-dwsh7\" (UID: \"e081eba9-4c2c-401b-84d2-1bfdd53460e9\") " pod="kube-system/kindnet-dwsh7"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244960    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75jb\" (UniqueName: \"kubernetes.io/projected/e081eba9-4c2c-401b-84d2-1bfdd53460e9-kube-api-access-l75jb\") pod \"kindnet-dwsh7\" (UID: \"e081eba9-4c2c-401b-84d2-1bfdd53460e9\") " pod="kube-system/kindnet-dwsh7"
	Oct 19 17:15:41 embed-certs-090139 kubelet[1332]: I1019 17:15:41.244988    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5baffb03-44e9-4304-a146-40598b517031-xtables-lock\") pod \"kube-proxy-8f4lh\" (UID: \"5baffb03-44e9-4304-a146-40598b517031\") " pod="kube-system/kube-proxy-8f4lh"
	Oct 19 17:15:42 embed-certs-090139 kubelet[1332]: I1019 17:15:42.467739    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8f4lh" podStartSLOduration=1.467621349 podStartE2EDuration="1.467621349s" podCreationTimestamp="2025-10-19 17:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:42.467516574 +0000 UTC m=+7.143161179" watchObservedRunningTime="2025-10-19 17:15:42.467621349 +0000 UTC m=+7.143265954"
	Oct 19 17:15:42 embed-certs-090139 kubelet[1332]: I1019 17:15:42.477862    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dwsh7" podStartSLOduration=1.477837488 podStartE2EDuration="1.477837488s" podCreationTimestamp="2025-10-19 17:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:42.477569684 +0000 UTC m=+7.153214291" watchObservedRunningTime="2025-10-19 17:15:42.477837488 +0000 UTC m=+7.153482093"
	Oct 19 17:15:52 embed-certs-090139 kubelet[1332]: I1019 17:15:52.466991    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:15:52 embed-certs-090139 kubelet[1332]: I1019 17:15:52.524152    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/761c74ff-17e1-44c3-b64d-dd9c9f9863d0-tmp\") pod \"storage-provisioner\" (UID: \"761c74ff-17e1-44c3-b64d-dd9c9f9863d0\") " pod="kube-system/storage-provisioner"
	Oct 19 17:15:52 embed-certs-090139 kubelet[1332]: I1019 17:15:52.524231    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1cb390d-b0bd-4da0-9e8a-92250e2485cf-config-volume\") pod \"coredns-66bc5c9577-zw7d8\" (UID: \"e1cb390d-b0bd-4da0-9e8a-92250e2485cf\") " pod="kube-system/coredns-66bc5c9577-zw7d8"
	Oct 19 17:15:52 embed-certs-090139 kubelet[1332]: I1019 17:15:52.524265    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhm8k\" (UniqueName: \"kubernetes.io/projected/e1cb390d-b0bd-4da0-9e8a-92250e2485cf-kube-api-access-mhm8k\") pod \"coredns-66bc5c9577-zw7d8\" (UID: \"e1cb390d-b0bd-4da0-9e8a-92250e2485cf\") " pod="kube-system/coredns-66bc5c9577-zw7d8"
	Oct 19 17:15:52 embed-certs-090139 kubelet[1332]: I1019 17:15:52.524290    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5np\" (UniqueName: \"kubernetes.io/projected/761c74ff-17e1-44c3-b64d-dd9c9f9863d0-kube-api-access-4w5np\") pod \"storage-provisioner\" (UID: \"761c74ff-17e1-44c3-b64d-dd9c9f9863d0\") " pod="kube-system/storage-provisioner"
	Oct 19 17:15:53 embed-certs-090139 kubelet[1332]: I1019 17:15:53.495863    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.495848144 podStartE2EDuration="12.495848144s" podCreationTimestamp="2025-10-19 17:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:53.495755451 +0000 UTC m=+18.171400056" watchObservedRunningTime="2025-10-19 17:15:53.495848144 +0000 UTC m=+18.171492751"
	Oct 19 17:15:55 embed-certs-090139 kubelet[1332]: I1019 17:15:55.574955    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zw7d8" podStartSLOduration=14.574926537 podStartE2EDuration="14.574926537s" podCreationTimestamp="2025-10-19 17:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:53.507404797 +0000 UTC m=+18.183049401" watchObservedRunningTime="2025-10-19 17:15:55.574926537 +0000 UTC m=+20.250571144"
	Oct 19 17:15:55 embed-certs-090139 kubelet[1332]: I1019 17:15:55.645809    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tdq\" (UniqueName: \"kubernetes.io/projected/3863b530-fafc-49ad-aaf5-39e7efa20789-kube-api-access-d2tdq\") pod \"busybox\" (UID: \"3863b530-fafc-49ad-aaf5-39e7efa20789\") " pod="default/busybox"
	Oct 19 17:15:57 embed-certs-090139 kubelet[1332]: I1019 17:15:57.507965    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.747166325 podStartE2EDuration="2.507941284s" podCreationTimestamp="2025-10-19 17:15:55 +0000 UTC" firstStartedPulling="2025-10-19 17:15:55.903013464 +0000 UTC m=+20.578658048" lastFinishedPulling="2025-10-19 17:15:56.66378842 +0000 UTC m=+21.339433007" observedRunningTime="2025-10-19 17:15:57.50787034 +0000 UTC m=+22.183514946" watchObservedRunningTime="2025-10-19 17:15:57.507941284 +0000 UTC m=+22.183585889"
	Oct 19 17:16:03 embed-certs-090139 kubelet[1332]: E1019 17:16:03.665603    1332 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48510->127.0.0.1:41189: write tcp 127.0.0.1:48510->127.0.0.1:41189: write: broken pipe
	
	
	==> storage-provisioner [e312667c898599551197b36397dd16442daac0b6e51a2a536c67d42f06d57f5c] <==
	I1019 17:15:52.863003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:15:52.872900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:15:52.872944       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:15:52.875456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:52.883291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:52.883433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:15:52.883579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_457ac6fb-cc1a-4f57-97ad-f2ffa9109ec2!
	I1019 17:15:52.883590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abf9a435-53d4-45a2-bf52-58f629c09914", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-090139_457ac6fb-cc1a-4f57-97ad-f2ffa9109ec2 became leader
	W1019 17:15:52.886009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:52.890495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:15:52.984101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_457ac6fb-cc1a-4f57-97ad-f2ffa9109ec2!
	W1019 17:15:54.893964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:54.898618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:56.901756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:56.906718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:58.910178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:15:58.916850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:00.920950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:00.925309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:02.928498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:02.941225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:04.945601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:04.951133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-090139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.454762ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
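The MK_ADDON_ENABLE_PAUSED exit above is a pre-flight failure rather than an addon failure: the stderr shows minikube shelling out to `sudo runc list -f json` to look for paused containers, and runc exits 1 because /run/runc does not exist on this crio node. A minimal Go sketch of that kind of check (illustrative only, not minikube's actual source; the runcContainer struct is an assumption about the fields of interest in runc's JSON output):

// Illustrative sketch only: reproduce the paused-container check implied
// by the stderr above. On a node where crio has started no containers,
// /run/runc is absent, `runc list` exits 1, and the error surfaces as
// "check paused: list paused: runc: ...".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the subset of `runc list -f json` fields used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the branch hit in the log: exit status 1 with
		// "open /run/runc: no such file or directory" on stderr.
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var containers []runcContainer // `runc list -f json` prints null or an array
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Println("check paused: list paused:", err)
		return
	}
	fmt.Println("paused containers:", paused)
}

Under this sketch, a missing /run/runc produces the same error chain seen in the stderr above, independent of whether the addon itself could have been enabled.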
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-663015 describe deploy/metrics-server -n kube-system: exit status 1 (69.40964ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-663015 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
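For reference, the expected substring in the assertion above is not arbitrary: it is the --registries value for MetricsServer ("fake.domain") prefixed to the --images value ("registry.k8s.io/echoserver:1.4"). A minimal sketch of that composition (expectedImage is a hypothetical helper, not a minikube function):

// Hypothetical sketch of how the expected image reference is assembled
// from the --images/--registries flags passed to `addons enable`.
package main

import "fmt"

func expectedImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// Prints "fake.domain/registry.k8s.io/echoserver:1.4", the substring
	// the assertion above looks for in the deployment description.
	fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
}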
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-663015
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-663015:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	        "Created": "2025-10-19T17:15:37.665155013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257219,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:15:37.722632814Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hostname",
	        "HostsPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hosts",
	        "LogPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1-json.log",
	        "Name": "/default-k8s-diff-port-663015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-663015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-663015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	                "LowerDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-663015",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-663015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-663015",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f8b9690eb420fb2972e1217caa0f32d348f4baeb2a44dc7c7a70564b1f4a3ba",
	            "SandboxKey": "/var/run/docker/netns/4f8b9690eb42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-663015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:48:c1:04:9f:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11e31399831af0dccd7c897515d1d7c4e22e31f4e5da333490f417dfbabfda44",
	                    "EndpointID": "1f8379345922b1ba6f121d8f5a9f44c26371f2fd98e03a1f583c7eed5a3db3e4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-663015",
	                        "8abacb4fd440"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25: (1.097060772s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-904967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p old-k8s-version-904967 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:15:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:15:59.322544  262636 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:15:59.322692  262636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:59.322702  262636 out.go:374] Setting ErrFile to fd 2...
	I1019 17:15:59.322709  262636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:15:59.322927  262636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:15:59.323472  262636 out.go:368] Setting JSON to false
	I1019 17:15:59.325015  262636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3505,"bootTime":1760890654,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:15:59.325089  262636 start.go:143] virtualization: kvm guest
	I1019 17:15:59.327123  262636 out.go:179] * [newest-cni-848035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:15:59.329230  262636 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:15:59.329273  262636 notify.go:221] Checking for updates...
	I1019 17:15:59.332618  262636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:15:59.334099  262636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:15:59.335407  262636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:15:59.336570  262636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:15:59.337968  262636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:15:59.339533  262636 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339667  262636 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339792  262636 config.go:182] Loaded profile config "kubernetes-upgrade-318879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:15:59.339901  262636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:15:59.368133  262636 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:15:59.368214  262636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:59.440031  262636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:59.42241722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:59.440197  262636 docker.go:319] overlay module found
	I1019 17:15:59.444982  262636 out.go:179] * Using the docker driver based on user configuration
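
[editor's note] The driver check above shells out to the docker CLI and decodes the JSON it emits with --format "{{json .}}". For reference, a minimal Go sketch of that pattern; the struct here is a hypothetical subset of the fields visible in the log (ServerVersion, CgroupDriver, NCPU, MemTotal), not minikube's actual types:

    // Hypothetical sketch: decode a few fields from `docker system info`.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Subset of the JSON docker emits; the real output has many more fields.
    type dockerInfo struct {
        ServerVersion string `json:"ServerVersion"`
        CgroupDriver  string `json:"CgroupDriver"`
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatalf("docker system info: %v", err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatalf("decode: %v", err)
        }
        fmt.Printf("server=%s cgroup=%s cpus=%d mem=%d\n",
            info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
    }
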
	I1019 17:15:58.935258  256207 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-663015"
	I1019 17:15:58.935301  256207 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:15:58.935764  256207 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:15:58.936375  256207 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:58.936393  256207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:15:58.936455  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:58.966134  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:15:58.975246  256207 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:58.975271  256207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:15:58.975334  256207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:15:59.006504  256207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
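
[editor's note] The two inspect calls above recover the host port that docker published for the guest's 22/tcp, so the SSH client can connect via 127.0.0.1. A minimal sketch of the same lookup (container name taken from the log; error handling is ours):

    // Sketch: recover the host port docker published for the guest's sshd.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "default-k8s-diff-port-663015").Output()
        if err != nil {
            log.Fatalf("inspect: %v", err)
        }
        // The log shows this resolving to 33079.
        fmt.Println("ssh docker@127.0.0.1 -p " + strings.TrimSpace(string(out)))
    }
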
	I1019 17:15:59.027604  256207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:15:59.104432  256207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:15:59.104890  256207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:15:59.125546  256207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:15:59.240964  256207 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
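
[editor's note] For context, the sed pipeline a few lines up splices a hosts plugin block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the host gateway (192.168.85.1 here). Reconstructed from the command itself, the injected fragment is:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
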
	I1019 17:15:59.481139  256207 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-663015" to be "Ready" ...
	I1019 17:15:59.482157  256207 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:15:59.446341  262636 start.go:309] selected driver: docker
	I1019 17:15:59.446424  262636 start.go:930] validating driver "docker" against <nil>
	I1019 17:15:59.446443  262636 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:15:59.447427  262636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:15:59.513489  262636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:15:59.50380274 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:15:59.513666  262636 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 17:15:59.513700  262636 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 17:15:59.513991  262636 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:15:59.515437  262636 out.go:179] * Using Docker driver with root privileges
	I1019 17:15:59.516540  262636 cni.go:84] Creating CNI manager for ""
	I1019 17:15:59.516619  262636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:15:59.516633  262636 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:15:59.516756  262636 start.go:353] cluster config:
	{Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:15:59.518966  262636 out.go:179] * Starting "newest-cni-848035" primary control-plane node in "newest-cni-848035" cluster
	I1019 17:15:59.520393  262636 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:15:59.521668  262636 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:15:59.522998  262636 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:15:59.523036  262636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:15:59.523043  262636 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:15:59.523081  262636 cache.go:59] Caching tarball of preloaded images
	I1019 17:15:59.523178  262636 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:15:59.523194  262636 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:15:59.523301  262636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/config.json ...
	I1019 17:15:59.523328  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/config.json: {Name:mk46f943bf6dbbc8e42c314c2013533e30a03ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:15:59.544936  262636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:15:59.544955  262636 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:15:59.544972  262636 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:15:59.545002  262636 start.go:360] acquireMachinesLock for newest-cni-848035: {Name:mk73020b94db81f5952879aa2f581596a932c88c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:15:59.545152  262636 start.go:364] duration metric: took 117.087µs to acquireMachinesLock for "newest-cni-848035"
	I1019 17:15:59.545188  262636 start.go:93] Provisioning new machine with config: &{Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:15:59.545250  262636 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:15:59.483617  256207 addons.go:515] duration metric: took 576.969555ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:15:59.748741  256207 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-663015" context rescaled to 1 replicas
	W1019 17:16:01.484640  256207 node_ready.go:57] node "default-k8s-diff-port-663015" has "Ready":"False" status (will retry)
	I1019 17:15:58.515343  219832 cri.go:89] found id: ""
	I1019 17:15:58.515372  219832 logs.go:282] 0 containers: []
	W1019 17:15:58.515381  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:15:58.515388  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:15:58.515462  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:15:58.546610  219832 cri.go:89] found id: ""
	I1019 17:15:58.546632  219832 logs.go:282] 0 containers: []
	W1019 17:15:58.546640  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:15:58.546654  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:15:58.546676  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:15:58.561879  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:15:58.561909  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:15:58.599537  219832 logs.go:123] Gathering logs for kube-apiserver [9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc] ...
	I1019 17:15:58.599572  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9069a5be85580c182da77b88270f9798850d1b399012f0273ac8911e2aca89dc"
	I1019 17:15:58.635426  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:15:58.635462  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:15:58.667022  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:15:58.667052  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:15:58.796453  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:15:58.796484  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:15:58.866320  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:15:58.866352  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:15:58.866369  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:15:58.944419  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:15:58.944459  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:15:58.997705  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:15:58.997777  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:15:59.084149  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:15:59.084191  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:01.635141  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:01.635694  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:01.635751  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:01.635818  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:01.666235  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:01.666260  219832 cri.go:89] found id: ""
	I1019 17:16:01.666269  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:01.666333  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.670421  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:01.670497  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:01.701922  219832 cri.go:89] found id: ""
	I1019 17:16:01.701954  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.701972  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:01.701979  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:01.702037  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:01.730807  219832 cri.go:89] found id: ""
	I1019 17:16:01.730832  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.730842  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:01.730849  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:01.730913  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:01.759751  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:01.759777  219832 cri.go:89] found id: ""
	I1019 17:16:01.759785  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:01.759843  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.763917  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:01.763990  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:01.791456  219832 cri.go:89] found id: ""
	I1019 17:16:01.791487  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.791498  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:01.791507  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:01.791566  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:01.823314  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:01.823336  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:01.823340  219832 cri.go:89] found id: ""
	I1019 17:16:01.823347  219832 logs.go:282] 2 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:16:01.823393  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.827617  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:01.831523  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:01.831589  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:01.860369  219832 cri.go:89] found id: ""
	I1019 17:16:01.860392  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.860400  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:01.860405  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:01.860449  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:01.889639  219832 cri.go:89] found id: ""
	I1019 17:16:01.889666  219832 logs.go:282] 0 containers: []
	W1019 17:16:01.889677  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:01.889693  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:01.889706  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:01.906946  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:01.906978  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:01.966849  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:01.966874  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:01.966891  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:02.002930  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:16:02.002968  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:02.030485  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:02.030518  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:02.082074  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:02.082110  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:02.183474  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:02.183506  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:02.245100  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:02.245143  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:02.274811  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:02.274842  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
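
[editor's note] The retry loop driving these passes probes the apiserver's /healthz endpoint and gathers logs each time the dial is refused. A rough Go sketch of such a probe (InsecureSkipVerify because the apiserver serves a self-signed certificate; the retry count and sleep are our assumptions):

    // Sketch: poll the apiserver health endpoint until it responds.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 10; i++ {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err != nil {
                fmt.Printf("stopped: %v (will retry)\n", err)
                time.Sleep(3 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }
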
	I1019 17:15:59.548160  262636 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:15:59.548357  262636 start.go:159] libmachine.API.Create for "newest-cni-848035" (driver="docker")
	I1019 17:15:59.548390  262636 client.go:171] LocalClient.Create starting
	I1019 17:15:59.548488  262636 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:15:59.548521  262636 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:59.548535  262636 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:59.548585  262636 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:15:59.548607  262636 main.go:143] libmachine: Decoding PEM data...
	I1019 17:15:59.548617  262636 main.go:143] libmachine: Parsing certificate...
	I1019 17:15:59.548992  262636 cli_runner.go:164] Run: docker network inspect newest-cni-848035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:15:59.567144  262636 cli_runner.go:211] docker network inspect newest-cni-848035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:15:59.567225  262636 network_create.go:284] running [docker network inspect newest-cni-848035] to gather additional debugging logs...
	I1019 17:15:59.567246  262636 cli_runner.go:164] Run: docker network inspect newest-cni-848035
	W1019 17:15:59.587004  262636 cli_runner.go:211] docker network inspect newest-cni-848035 returned with exit code 1
	I1019 17:15:59.587042  262636 network_create.go:287] error running [docker network inspect newest-cni-848035]: docker network inspect newest-cni-848035: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-848035 not found
	I1019 17:15:59.587061  262636 network_create.go:289] output of [docker network inspect newest-cni-848035]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-848035 not found
	
	** /stderr **
	I1019 17:15:59.587186  262636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:15:59.606238  262636 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:15:59.606955  262636 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:15:59.607693  262636 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:15:59.608518  262636 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ecea20}
	I1019 17:15:59.608544  262636 network_create.go:124] attempt to create docker network newest-cni-848035 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 17:15:59.608611  262636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-848035 newest-cni-848035
	I1019 17:15:59.672766  262636 network_create.go:108] docker network newest-cni-848035 192.168.76.0/24 created
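
[editor's note] The subnet probing above walks candidate 192.168.x.0/24 blocks (49, 58, 67, ...) and takes the first one no existing bridge occupies. A sketch of that selection, assuming the step of 9 the log suggests (the helper name is ours, not minikube's):

    // Sketch: pick the first private /24 that no existing bridge occupies.
    package main

    import (
        "fmt"
        "net"
    )

    func firstFreeSubnet(taken []*net.IPNet) *net.IPNet {
        for third := 49; third < 256; third += 9 { // 49, 58, 67, 76, ...
            _, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            free := true
            for _, t := range taken {
                if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
                    free = false
                    break
                }
            }
            if free {
                return candidate
            }
        }
        return nil
    }

    func main() {
        var taken []*net.IPNet
        for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
            _, n, _ := net.ParseCIDR(cidr)
            taken = append(taken, n)
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, as in the log
    }
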
	I1019 17:15:59.672796  262636 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-848035" container
	I1019 17:15:59.672849  262636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:15:59.691058  262636 cli_runner.go:164] Run: docker volume create newest-cni-848035 --label name.minikube.sigs.k8s.io=newest-cni-848035 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:15:59.709827  262636 oci.go:103] Successfully created a docker volume newest-cni-848035
	I1019 17:15:59.709911  262636 cli_runner.go:164] Run: docker run --rm --name newest-cni-848035-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-848035 --entrypoint /usr/bin/test -v newest-cni-848035:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:16:00.096407  262636 oci.go:107] Successfully prepared a docker volume newest-cni-848035
	I1019 17:16:00.096464  262636 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:00.096485  262636 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:16:00.096558  262636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-848035:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:16:03.589472  256207 node_ready.go:57] node "default-k8s-diff-port-663015" has "Ready":"False" status (will retry)
	W1019 17:16:05.984378  256207 node_ready.go:57] node "default-k8s-diff-port-663015" has "Ready":"False" status (will retry)
	I1019 17:16:04.811142  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:04.811568  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:04.811625  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:04.811681  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:04.846377  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:04.846473  219832 cri.go:89] found id: ""
	I1019 17:16:04.846502  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:04.846553  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:04.851418  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:04.851489  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:04.884520  219832 cri.go:89] found id: ""
	I1019 17:16:04.884549  219832 logs.go:282] 0 containers: []
	W1019 17:16:04.884560  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:04.884568  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:04.884639  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:04.916586  219832 cri.go:89] found id: ""
	I1019 17:16:04.916635  219832 logs.go:282] 0 containers: []
	W1019 17:16:04.916647  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:04.916655  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:04.916727  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:04.950383  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:04.950403  219832 cri.go:89] found id: ""
	I1019 17:16:04.950411  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:04.950461  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:04.955018  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:04.955091  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:04.989244  219832 cri.go:89] found id: ""
	I1019 17:16:04.989272  219832 logs.go:282] 0 containers: []
	W1019 17:16:04.989283  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:04.989289  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:04.989349  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:05.028844  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:05.028871  219832 cri.go:89] found id: "fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:05.028876  219832 cri.go:89] found id: ""
	I1019 17:16:05.028886  219832 logs.go:282] 2 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa]
	I1019 17:16:05.028987  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:05.033370  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:05.038053  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:05.038130  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:05.069157  219832 cri.go:89] found id: ""
	I1019 17:16:05.069186  219832 logs.go:282] 0 containers: []
	W1019 17:16:05.069196  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:05.069204  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:05.069259  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:05.108890  219832 cri.go:89] found id: ""
	I1019 17:16:05.108917  219832 logs.go:282] 0 containers: []
	W1019 17:16:05.108928  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:05.108944  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:05.108957  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:05.238324  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:05.238360  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:05.256500  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:05.256525  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:05.320777  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:05.320799  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:05.320814  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:05.357387  219832 logs.go:123] Gathering logs for kube-controller-manager [fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa] ...
	I1019 17:16:05.357423  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcfbf068f0e9115a87dd7a29269c83dba9b140dc195fcd7468e2d9b339de7caa"
	I1019 17:16:05.385888  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:05.385922  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:05.418583  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:05.418615  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:05.485820  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:05.485852  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:05.518208  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:05.518231  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:08.073169  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:08.073590  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:08.073641  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:08.073688  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:08.101564  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:08.101591  219832 cri.go:89] found id: ""
	I1019 17:16:08.101601  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:08.101652  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:08.105929  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:08.105986  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:08.132459  219832 cri.go:89] found id: ""
	I1019 17:16:08.132488  219832 logs.go:282] 0 containers: []
	W1019 17:16:08.132500  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:08.132508  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:08.132556  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:08.159254  219832 cri.go:89] found id: ""
	I1019 17:16:08.159280  219832 logs.go:282] 0 containers: []
	W1019 17:16:08.159290  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:08.159296  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:08.159339  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:08.186344  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:08.186370  219832 cri.go:89] found id: ""
	I1019 17:16:08.186379  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:08.186440  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:08.190564  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:08.190632  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:08.217788  219832 cri.go:89] found id: ""
	I1019 17:16:08.217813  219832 logs.go:282] 0 containers: []
	W1019 17:16:08.217824  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:08.217830  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:08.217887  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:08.245733  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:08.245759  219832 cri.go:89] found id: ""
	I1019 17:16:08.245768  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:08.245839  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:08.250271  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:08.250337  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:08.276936  219832 cri.go:89] found id: ""
	I1019 17:16:08.276959  219832 logs.go:282] 0 containers: []
	W1019 17:16:08.276983  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:08.276990  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:08.277044  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:08.304972  219832 cri.go:89] found id: ""
	I1019 17:16:08.305001  219832 logs.go:282] 0 containers: []
	W1019 17:16:08.305013  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:08.305023  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:08.305039  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:08.399082  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:08.399120  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:08.414191  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:08.414220  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:08.471427  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:08.471452  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:08.471464  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:08.507300  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:08.507328  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:04.535607  262636 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-848035:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.439013693s)
	I1019 17:16:04.535643  262636 kic.go:203] duration metric: took 4.439154011s to extract preloaded images to volume ...
	W1019 17:16:04.535714  262636 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:16:04.535752  262636 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:16:04.535791  262636 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:16:04.596255  262636 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-848035 --name newest-cni-848035 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-848035 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-848035 --network newest-cni-848035 --ip 192.168.76.2 --volume newest-cni-848035:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:16:04.921050  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Running}}
	I1019 17:16:04.943093  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:04.964173  262636 cli_runner.go:164] Run: docker exec newest-cni-848035 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:16:05.020366  262636 oci.go:144] the created container "newest-cni-848035" has a running status.
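The --publish=127.0.0.1::PORT flags in the docker run above ask Docker to bind each container port to an ephemeral loopback port on the host; the SSH mapping that the later inspect calls resolve (port 33084 in this run) can also be read back by hand. A minimal sketch, assuming the container name from this run:

    # show the host port Docker picked for the container's sshd
    docker port newest-cni-848035 22      # -> 127.0.0.1:33084
    # and for the Kubernetes API server
    docker port newest-cni-848035 8443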
	I1019 17:16:05.020403  262636 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa...
	I1019 17:16:05.167266  262636 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:16:05.203530  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:05.226144  262636 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:16:05.226168  262636 kic_runner.go:114] Args: [docker exec --privileged newest-cni-848035 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:16:05.279893  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:05.299683  262636 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:05.299806  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:05.322827  262636 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:05.323217  262636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1019 17:16:05.323241  262636 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:05.323924  262636 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56660->127.0.0.1:33084: read: connection reset by peer
	I1019 17:16:08.459339  262636 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-848035
	
	I1019 17:16:08.459367  262636 ubuntu.go:182] provisioning hostname "newest-cni-848035"
	I1019 17:16:08.459433  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:08.478965  262636 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:08.479314  262636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1019 17:16:08.479339  262636 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-848035 && echo "newest-cni-848035" | sudo tee /etc/hostname
	I1019 17:16:08.629931  262636 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-848035
	
	I1019 17:16:08.630023  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:08.650042  262636 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:08.650310  262636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1019 17:16:08.650332  262636 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-848035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-848035/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-848035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:08.787035  262636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:16:08.787081  262636 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:16:08.787125  262636 ubuntu.go:190] setting up certificates
	I1019 17:16:08.787134  262636 provision.go:84] configureAuth start
	I1019 17:16:08.787196  262636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-848035
	I1019 17:16:08.805737  262636 provision.go:143] copyHostCerts
	I1019 17:16:08.805796  262636 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:16:08.805804  262636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:16:08.805867  262636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:16:08.805959  262636 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:16:08.805968  262636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:16:08.805996  262636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:16:08.806054  262636 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:16:08.806062  262636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:16:08.806113  262636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:16:08.806175  262636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.newest-cni-848035 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-848035]
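The server cert above is generated in-process with SANs covering every name the API endpoint may be reached by. A rough openssl equivalent, for illustration only (the file names are placeholders, not minikube's real paths):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-848035"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-848035")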
	I1019 17:16:09.216628  262636 provision.go:177] copyRemoteCerts
	I1019 17:16:09.216689  262636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:09.216722  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.235961  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:09.334719  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:09.354393  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:16:09.372380  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 17:16:09.390387  262636 provision.go:87] duration metric: took 603.241065ms to configureAuth
	I1019 17:16:09.390413  262636 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:09.390579  262636 config.go:182] Loaded profile config "newest-cni-848035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:09.390677  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.410271  262636 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:09.410481  262636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1019 17:16:09.410501  262636 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:09.657600  262636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:09.657630  262636 machine.go:97] duration metric: took 4.357916043s to provisionDockerMachine
	I1019 17:16:09.657642  262636 client.go:174] duration metric: took 10.109241662s to LocalClient.Create
	I1019 17:16:09.657660  262636 start.go:167] duration metric: took 10.109303363s to libmachine.API.Create "newest-cni-848035"
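The sysconfig write above leaves this on the node (tee echoes it back in the SSH output):

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

Presumably the kicbase crio.service unit sources this file (via an EnvironmentFile directive) so CRI-O treats the in-cluster service CIDR as an insecure registry; that wiring is inferred from the crio restart that follows the write, not shown in this log.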
	I1019 17:16:09.657668  262636 start.go:293] postStartSetup for "newest-cni-848035" (driver="docker")
	I1019 17:16:09.657681  262636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:09.657758  262636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:09.657817  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.676472  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:09.775531  262636 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:09.779176  262636 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:09.779207  262636 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:09.779220  262636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:16:09.779280  262636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:16:09.779369  262636 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:16:09.779477  262636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:09.787510  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:09.807931  262636 start.go:296] duration metric: took 150.247132ms for postStartSetup
	I1019 17:16:09.808371  262636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-848035
	I1019 17:16:09.826364  262636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/config.json ...
	I1019 17:16:09.826619  262636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:09.826659  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.845174  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:09.945393  262636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:09.950473  262636 start.go:128] duration metric: took 10.405193751s to createHost
	I1019 17:16:09.950504  262636 start.go:83] releasing machines lock for "newest-cni-848035", held for 10.405331555s
	I1019 17:16:09.950570  262636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-848035
	I1019 17:16:09.968807  262636 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:09.968859  262636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:09.968866  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.968906  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:09.989053  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:09.989590  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:10.159110  262636 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:10.165977  262636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:10.201362  262636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:10.206225  262636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:10.206276  262636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:10.233058  262636 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
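CRI-O loads the lexicographically first config it finds in /etc/cni/net.d, so any leftover bridge/podman configs are parked with a .mk_disabled suffix before minikube installs its own CNI (kindnet is selected further down in this run). The rename performed above, spelled out as a standalone command:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;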
	I1019 17:16:10.233119  262636 start.go:496] detecting cgroup driver to use...
	I1019 17:16:10.233150  262636 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:16:10.233205  262636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:10.249453  262636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:10.263544  262636 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:10.263608  262636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:10.283285  262636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:10.306896  262636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:10.389962  262636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:10.478217  262636 docker.go:234] disabling docker service ...
	I1019 17:16:10.478291  262636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:10.497181  262636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:10.510607  262636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:10.594733  262636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:10.686218  262636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:16:10.699145  262636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:10.713894  262636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:10.713995  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.725299  262636 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:16:10.725365  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.735163  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.744304  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.753537  262636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:10.761868  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.771094  262636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.785510  262636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:10.794576  262636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:10.802438  262636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
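The net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, as a plausible resulting fragment (the stock drop-in's exact layout varies by kicbase image, so the section placement here is illustrative):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]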
	I1019 17:16:10.810020  262636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:10.885516  262636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:16:10.992768  262636 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:10.992832  262636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:10.996893  262636 start.go:564] Will wait 60s for crictl version
	I1019 17:16:10.996944  262636 ssh_runner.go:195] Run: which crictl
	I1019 17:16:11.000868  262636 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:11.024827  262636 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:11.024897  262636 ssh_runner.go:195] Run: crio --version
	I1019 17:16:11.054356  262636 ssh_runner.go:195] Run: crio --version
	I1019 17:16:11.085472  262636 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:16:11.086967  262636 cli_runner.go:164] Run: docker network inspect newest-cni-848035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:11.105229  262636 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:11.109814  262636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:11.122734  262636 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1019 17:16:07.985128  256207 node_ready.go:57] node "default-k8s-diff-port-663015" has "Ready":"False" status (will retry)
	I1019 17:16:09.984659  256207 node_ready.go:49] node "default-k8s-diff-port-663015" is "Ready"
	I1019 17:16:09.984687  256207 node_ready.go:38] duration metric: took 10.503508622s for node "default-k8s-diff-port-663015" to be "Ready" ...
	I1019 17:16:09.984700  256207 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:09.984750  256207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:09.997876  256207 api_server.go:72] duration metric: took 11.091275192s to wait for apiserver process to appear ...
	I1019 17:16:09.997901  256207 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:09.997917  256207 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 17:16:10.005960  256207 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
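The same probe can be reproduced from the host; /healthz is ordinarily readable without credentials because the system:public-info-viewer ClusterRole grants it to unauthenticated users (-k skips verification of the cluster's self-signed serving cert):

    curl -k https://192.168.85.2:8444/healthz
    # -> ok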
	I1019 17:16:10.006969  256207 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:10.006992  256207 api_server.go:131] duration metric: took 9.084731ms to wait for apiserver health ...
	I1019 17:16:10.007000  256207 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:10.010174  256207 system_pods.go:59] 8 kube-system pods found
	I1019 17:16:10.010210  256207 system_pods.go:61] "coredns-66bc5c9577-2r8tf" [ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:10.010219  256207 system_pods.go:61] "etcd-default-k8s-diff-port-663015" [18e1ec40-2b88-421c-9e44-970a2018c003] Running
	I1019 17:16:10.010235  256207 system_pods.go:61] "kindnet-rrthg" [4d236960-9ec3-445d-98fd-d04af4ea465f] Running
	I1019 17:16:10.010240  256207 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-663015" [d8c73da8-cb33-4f4c-b938-4bac98577a62] Running
	I1019 17:16:10.010246  256207 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-663015" [2448d29d-047f-4fe2-81e8-11126e65ddc6] Running
	I1019 17:16:10.010254  256207 system_pods.go:61] "kube-proxy-g62dn" [0096be5f-f9a5-4aab-a41f-67004f646d90] Running
	I1019 17:16:10.010259  256207 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-663015" [709c1526-869e-45c5-a052-4a55a0254038] Running
	I1019 17:16:10.010270  256207 system_pods.go:61] "storage-provisioner" [c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:10.010283  256207 system_pods.go:74] duration metric: took 3.275831ms to wait for pod list to return data ...
	I1019 17:16:10.010298  256207 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:10.012588  256207 default_sa.go:45] found service account: "default"
	I1019 17:16:10.012611  256207 default_sa.go:55] duration metric: took 2.306821ms for default service account to be created ...
	I1019 17:16:10.012621  256207 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:16:10.015040  256207 system_pods.go:86] 8 kube-system pods found
	I1019 17:16:10.015095  256207 system_pods.go:89] "coredns-66bc5c9577-2r8tf" [ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:10.015103  256207 system_pods.go:89] "etcd-default-k8s-diff-port-663015" [18e1ec40-2b88-421c-9e44-970a2018c003] Running
	I1019 17:16:10.015111  256207 system_pods.go:89] "kindnet-rrthg" [4d236960-9ec3-445d-98fd-d04af4ea465f] Running
	I1019 17:16:10.015117  256207 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-663015" [d8c73da8-cb33-4f4c-b938-4bac98577a62] Running
	I1019 17:16:10.015122  256207 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-663015" [2448d29d-047f-4fe2-81e8-11126e65ddc6] Running
	I1019 17:16:10.015127  256207 system_pods.go:89] "kube-proxy-g62dn" [0096be5f-f9a5-4aab-a41f-67004f646d90] Running
	I1019 17:16:10.015137  256207 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-663015" [709c1526-869e-45c5-a052-4a55a0254038] Running
	I1019 17:16:10.015143  256207 system_pods.go:89] "storage-provisioner" [c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:10.015169  256207 retry.go:31] will retry after 258.663037ms: missing components: kube-dns
	I1019 17:16:10.277556  256207 system_pods.go:86] 8 kube-system pods found
	I1019 17:16:10.277595  256207 system_pods.go:89] "coredns-66bc5c9577-2r8tf" [ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:10.277604  256207 system_pods.go:89] "etcd-default-k8s-diff-port-663015" [18e1ec40-2b88-421c-9e44-970a2018c003] Running
	I1019 17:16:10.277615  256207 system_pods.go:89] "kindnet-rrthg" [4d236960-9ec3-445d-98fd-d04af4ea465f] Running
	I1019 17:16:10.277621  256207 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-663015" [d8c73da8-cb33-4f4c-b938-4bac98577a62] Running
	I1019 17:16:10.277627  256207 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-663015" [2448d29d-047f-4fe2-81e8-11126e65ddc6] Running
	I1019 17:16:10.277634  256207 system_pods.go:89] "kube-proxy-g62dn" [0096be5f-f9a5-4aab-a41f-67004f646d90] Running
	I1019 17:16:10.277639  256207 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-663015" [709c1526-869e-45c5-a052-4a55a0254038] Running
	I1019 17:16:10.277644  256207 system_pods.go:89] "storage-provisioner" [c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74] Running
	I1019 17:16:10.277661  256207 retry.go:31] will retry after 324.809456ms: missing components: kube-dns
	I1019 17:16:10.607249  256207 system_pods.go:86] 8 kube-system pods found
	I1019 17:16:10.607284  256207 system_pods.go:89] "coredns-66bc5c9577-2r8tf" [ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70] Running
	I1019 17:16:10.607292  256207 system_pods.go:89] "etcd-default-k8s-diff-port-663015" [18e1ec40-2b88-421c-9e44-970a2018c003] Running
	I1019 17:16:10.607298  256207 system_pods.go:89] "kindnet-rrthg" [4d236960-9ec3-445d-98fd-d04af4ea465f] Running
	I1019 17:16:10.607306  256207 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-663015" [d8c73da8-cb33-4f4c-b938-4bac98577a62] Running
	I1019 17:16:10.607313  256207 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-663015" [2448d29d-047f-4fe2-81e8-11126e65ddc6] Running
	I1019 17:16:10.607319  256207 system_pods.go:89] "kube-proxy-g62dn" [0096be5f-f9a5-4aab-a41f-67004f646d90] Running
	I1019 17:16:10.607327  256207 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-663015" [709c1526-869e-45c5-a052-4a55a0254038] Running
	I1019 17:16:10.607346  256207 system_pods.go:89] "storage-provisioner" [c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74] Running
	I1019 17:16:10.607360  256207 system_pods.go:126] duration metric: took 594.731443ms to wait for k8s-apps to be running ...
	I1019 17:16:10.607373  256207 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:16:10.607425  256207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:10.621426  256207 system_svc.go:56] duration metric: took 14.043039ms WaitForService to wait for kubelet
	I1019 17:16:10.621463  256207 kubeadm.go:587] duration metric: took 11.714863872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:10.621488  256207 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:10.624532  256207 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:10.624567  256207 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:10.624586  256207 node_conditions.go:105] duration metric: took 3.091665ms to run NodePressure ...
	I1019 17:16:10.624601  256207 start.go:242] waiting for startup goroutines ...
	I1019 17:16:10.624612  256207 start.go:247] waiting for cluster config update ...
	I1019 17:16:10.624633  256207 start.go:256] writing updated cluster config ...
	I1019 17:16:10.625011  256207 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:10.631451  256207 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:10.635919  256207 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2r8tf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.641562  256207 pod_ready.go:94] pod "coredns-66bc5c9577-2r8tf" is "Ready"
	I1019 17:16:10.641587  256207 pod_ready.go:86] duration metric: took 5.63746ms for pod "coredns-66bc5c9577-2r8tf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.643572  256207 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.647544  256207 pod_ready.go:94] pod "etcd-default-k8s-diff-port-663015" is "Ready"
	I1019 17:16:10.647567  256207 pod_ready.go:86] duration metric: took 3.974392ms for pod "etcd-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.649369  256207 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.653158  256207 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-663015" is "Ready"
	I1019 17:16:10.653221  256207 pod_ready.go:86] duration metric: took 3.792612ms for pod "kube-apiserver-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:10.655250  256207 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:11.035995  256207 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-663015" is "Ready"
	I1019 17:16:11.036027  256207 pod_ready.go:86] duration metric: took 380.750093ms for pod "kube-controller-manager-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:11.237012  256207 pod_ready.go:83] waiting for pod "kube-proxy-g62dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:11.636971  256207 pod_ready.go:94] pod "kube-proxy-g62dn" is "Ready"
	I1019 17:16:11.636999  256207 pod_ready.go:86] duration metric: took 399.962073ms for pod "kube-proxy-g62dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:11.836221  256207 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:12.236318  256207 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-663015" is "Ready"
	I1019 17:16:12.236350  256207 pod_ready.go:86] duration metric: took 400.104305ms for pod "kube-scheduler-default-k8s-diff-port-663015" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:12.236367  256207 pod_ready.go:40] duration metric: took 1.604860787s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:12.285294  256207 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:16:12.287303  256207 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-663015" cluster and "default" namespace by default
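The label-driven readiness wait above has a straightforward kubectl equivalent, sketched here for manual spot-checks against this cluster:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system get pods -l component=kube-apiserver
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m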
	I1019 17:16:08.568957  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:08.568989  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:08.596050  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:08.596094  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:08.652131  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:08.652160  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:11.185121  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:11.185524  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:11.185573  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:11.185621  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:11.214858  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:11.214884  219832 cri.go:89] found id: ""
	I1019 17:16:11.214894  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:11.214954  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:11.219170  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:11.219236  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:11.248559  219832 cri.go:89] found id: ""
	I1019 17:16:11.248584  219832 logs.go:282] 0 containers: []
	W1019 17:16:11.248596  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:11.248603  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:11.248653  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:11.277138  219832 cri.go:89] found id: ""
	I1019 17:16:11.277168  219832 logs.go:282] 0 containers: []
	W1019 17:16:11.277179  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:11.277187  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:11.277243  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:11.306600  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:11.306632  219832 cri.go:89] found id: ""
	I1019 17:16:11.306643  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:11.306712  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:11.310723  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:11.310786  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:11.338327  219832 cri.go:89] found id: ""
	I1019 17:16:11.338359  219832 logs.go:282] 0 containers: []
	W1019 17:16:11.338370  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:11.338377  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:11.338428  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:11.372583  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:11.372608  219832 cri.go:89] found id: ""
	I1019 17:16:11.372616  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:11.372666  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:11.376784  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:11.376841  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:11.404062  219832 cri.go:89] found id: ""
	I1019 17:16:11.404107  219832 logs.go:282] 0 containers: []
	W1019 17:16:11.404118  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:11.404125  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:11.404182  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:11.432927  219832 cri.go:89] found id: ""
	I1019 17:16:11.432959  219832 logs.go:282] 0 containers: []
	W1019 17:16:11.432971  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:11.432981  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:11.432996  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:11.499786  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:11.499811  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:11.499827  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:11.535819  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:11.535849  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:11.591106  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:11.591140  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:11.619557  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:11.619585  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:11.670604  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:11.670638  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:11.702141  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:11.702167  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:11.798012  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:11.798041  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:11.124142  262636 kubeadm.go:884] updating cluster {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:11.124267  262636 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:11.124327  262636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:11.156971  262636 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:11.156994  262636 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:11.157037  262636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:11.182731  262636 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:11.182754  262636 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:11.182762  262636 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:11.182853  262636 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-848035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
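In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the packaged unit so the next ExecStart replaces it instead of defining a second command. The merged result can be inspected on the node with:

    systemctl cat kubelet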
	I1019 17:16:11.182920  262636 ssh_runner.go:195] Run: crio config
	I1019 17:16:11.234554  262636 cni.go:84] Creating CNI manager for ""
	I1019 17:16:11.234574  262636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:11.234594  262636 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:16:11.234625  262636 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-848035 NodeName:newest-cni-848035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:11.234784  262636 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-848035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:16:11.234856  262636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:11.244823  262636 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:11.244896  262636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:11.254470  262636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:16:11.268971  262636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:11.287774  262636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
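With kubeadm.yaml.new staged, the generated config could be sanity-checked before cluster bring-up; recent kubeadm releases ship a validate subcommand (a step this log does not itself perform, so treat it as an optional extra):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new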
	I1019 17:16:11.302777  262636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:11.307051  262636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:11.317891  262636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:11.404484  262636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:11.424729  262636 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035 for IP: 192.168.76.2
	I1019 17:16:11.424755  262636 certs.go:195] generating shared ca certs ...
	I1019 17:16:11.424776  262636 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.424942  262636 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:16:11.425016  262636 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:16:11.425031  262636 certs.go:257] generating profile certs ...
	I1019 17:16:11.425127  262636 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.key
	I1019 17:16:11.425153  262636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.crt with IP's: []
	I1019 17:16:11.572534  262636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.crt ...
	I1019 17:16:11.572565  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.crt: {Name:mkf2bef887864c9fa63cef42359ef0c97d509344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.572742  262636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.key ...
	I1019 17:16:11.572754  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.key: {Name:mkc88b0e24fda442b1a8452e5d43dd783f971064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.572831  262636 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69
	I1019 17:16:11.572854  262636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt.facc7e69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:16:11.659721  262636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt.facc7e69 ...
	I1019 17:16:11.659747  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt.facc7e69: {Name:mkc0eab47476e8968162ed4b6b4a0c43d36de327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.659901  262636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69 ...
	I1019 17:16:11.659913  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69: {Name:mkc013657ab9cdb3a3a89a81efc2d608087ad314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.659980  262636 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt.facc7e69 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt
	I1019 17:16:11.660085  262636 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69 -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key
	I1019 17:16:11.660152  262636 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key
	I1019 17:16:11.660169  262636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt with IP's: []
	I1019 17:16:11.755259  262636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt ...
	I1019 17:16:11.755287  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt: {Name:mkfbd25f1f315027cf102125d68256d6857e23df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.755485  262636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key ...
	I1019 17:16:11.755501  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key: {Name:mk65ffbe1ba576940a2e6c063a8ffeacefbc08b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:11.755724  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:16:11.755764  262636 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:11.755777  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:16:11.755798  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:11.755816  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:11.755836  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:11.755871  262636 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:11.756526  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:11.775080  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:11.793306  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:11.811769  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:16:11.830011  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:16:11.848533  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:16:11.867099  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:11.885526  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:11.904105  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:16:11.923786  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:11.943842  262636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:16:11.961599  262636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
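
Everything generated above is then pushed into the node by ssh_runner's scp step; note the last transfer copies the kubeconfig straight from memory rather than from a file on disk. A rough stand-in that shells out to the system scp binary is below (minikube itself drives an in-process SSH session; the port and user here are assumptions about a docker-driver node):

package main

import (
	"fmt"
	"os/exec"
)

// pushFile is a hypothetical stand-in for ssh_runner's scp step: copy a local
// file to a path inside the node over SSH.
func pushFile(sshPort, local, remote string) error {
	cmd := exec.Command("scp",
		"-P", sshPort,
		"-o", "StrictHostKeyChecking=no",
		local, "docker@127.0.0.1:"+remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s -> %s: %v: %s", local, remote, err, out)
	}
	return nil
}

func main() {
	// 32777 is a placeholder for whatever host port the node's sshd is mapped to.
	if err := pushFile("32777", "ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
		panic(err)
	}
}
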
	I1019 17:16:11.974502  262636 ssh_runner.go:195] Run: openssl version
	I1019 17:16:11.981028  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:16:11.989831  262636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:16:11.993669  262636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:16:11.993732  262636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:16:12.029542  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:12.039551  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:12.048700  262636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:12.052997  262636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:12.053078  262636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:12.088311  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:12.097813  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:16:12.108421  262636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:16:12.112636  262636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:16:12.112687  262636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:16:12.147899  262636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
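
The hash-named symlinks created here follow OpenSSL's c_rehash convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA.pem above), and /etc/ssl/certs/<hash>.0 must point at the PEM file for OpenSSL-based clients to find it in the trust store. A sketch of the same two steps from Go:

package main

import (
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash replicates the command pair in the log: ask openssl for
// the certificate's subject-name hash, then symlink <hash>.0 in the trust
// directory to the PEM file.
func linkBySubjectHash(pemPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := trustDir + "/" + hash + ".0"
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}
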
	I1019 17:16:12.157202  262636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:12.160978  262636 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:16:12.161035  262636 kubeadm.go:401] StartCluster: {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:12.161135  262636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:12.161188  262636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:12.187795  262636 cri.go:89] found id: ""
	I1019 17:16:12.187850  262636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:12.197188  262636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:16:12.206360  262636 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:16:12.206419  262636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:16:12.215302  262636 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:16:12.215324  262636 kubeadm.go:158] found existing configuration files:
	
	I1019 17:16:12.215386  262636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:16:12.223786  262636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:16:12.223841  262636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:16:12.232039  262636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:16:12.240421  262636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:16:12.240477  262636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:16:12.249316  262636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:16:12.258489  262636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:16:12.258572  262636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:16:12.266514  262636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:16:12.275662  262636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:16:12.275718  262636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
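
The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not mention https://control-plane.minikube.internal:8443 is deleted so that kubeadm init can regenerate it. In this run every grep exits with status 2 simply because the files do not exist yet (first start). A compact sketch of the same logic:

package main

import (
	"os"
	"strings"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// cleanStaleConfigs removes kubeconfig-style files that do not point at the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
// Missing files are fine: kubeadm init will create them.
func cleanStaleConfigs(paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // not there yet, nothing to clean
		}
		if !strings.Contains(string(data), controlPlaneURL) {
			os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
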
	I1019 17:16:12.283799  262636 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:16:12.352374  262636 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:16:12.417655  262636 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:16:14.314074  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:14.314493  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:14.314551  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:14.314606  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:14.343235  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:14.343253  219832 cri.go:89] found id: ""
	I1019 17:16:14.343260  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:14.343320  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:14.347426  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:14.347490  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:14.376916  219832 cri.go:89] found id: ""
	I1019 17:16:14.376945  219832 logs.go:282] 0 containers: []
	W1019 17:16:14.376956  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:14.376964  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:14.377027  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:14.404091  219832 cri.go:89] found id: ""
	I1019 17:16:14.404125  219832 logs.go:282] 0 containers: []
	W1019 17:16:14.404139  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:14.404151  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:14.404212  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:14.433591  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:14.433612  219832 cri.go:89] found id: ""
	I1019 17:16:14.433620  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:14.433678  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:14.438200  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:14.438256  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:14.466834  219832 cri.go:89] found id: ""
	I1019 17:16:14.466863  219832 logs.go:282] 0 containers: []
	W1019 17:16:14.466873  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:14.466881  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:14.466941  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:14.499150  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:14.499170  219832 cri.go:89] found id: ""
	I1019 17:16:14.499180  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:14.499237  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:14.503346  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:14.503406  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:14.530973  219832 cri.go:89] found id: ""
	I1019 17:16:14.530998  219832 logs.go:282] 0 containers: []
	W1019 17:16:14.531051  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:14.531062  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:14.531133  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:14.557951  219832 cri.go:89] found id: ""
	I1019 17:16:14.557980  219832 logs.go:282] 0 containers: []
	W1019 17:16:14.557991  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:14.558000  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:14.558014  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:14.616363  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:14.616385  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:14.616399  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:14.649752  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:14.649781  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:14.705943  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:14.705980  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:14.734044  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:14.734096  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:14.783084  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:14.783122  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:14.813855  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:14.813880  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:14.906680  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:14.906713  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
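
This block (and the near-identical one that follows) is minikube's retry loop for an apiserver that is still coming up: poll /healthz, and on connection refused, collect logs from whichever control-plane containers CRI-O reports. A minimal version of the probe itself is sketched below; unlike minikube, it skips TLS verification instead of loading the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz is a simplified version of the api_server.go check above: hit
// https://<node-ip>:8443/healthz and treat anything but 200 as "not ready".
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for i := 0; i < 5; i++ {
		if err := probeHealthz("https://192.168.94.2:8443/healthz"); err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
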
	I1019 17:16:17.423129  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:17.423597  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:17.423658  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:17.423717  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:17.456995  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:17.457019  219832 cri.go:89] found id: ""
	I1019 17:16:17.457027  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:17.457119  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:17.461700  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:17.461760  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:17.491843  219832 cri.go:89] found id: ""
	I1019 17:16:17.491867  219832 logs.go:282] 0 containers: []
	W1019 17:16:17.491879  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:17.491886  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:17.491946  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:17.527108  219832 cri.go:89] found id: ""
	I1019 17:16:17.527137  219832 logs.go:282] 0 containers: []
	W1019 17:16:17.527149  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:17.527156  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:17.527236  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:17.563103  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:17.563128  219832 cri.go:89] found id: ""
	I1019 17:16:17.563135  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:17.563206  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:17.567230  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:17.567301  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:17.599902  219832 cri.go:89] found id: ""
	I1019 17:16:17.599928  219832 logs.go:282] 0 containers: []
	W1019 17:16:17.599945  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:17.599952  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:17.600012  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:17.632825  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:17.632849  219832 cri.go:89] found id: ""
	I1019 17:16:17.632859  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:17.632915  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:17.637732  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:17.637797  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:17.668192  219832 cri.go:89] found id: ""
	I1019 17:16:17.668223  219832 logs.go:282] 0 containers: []
	W1019 17:16:17.668233  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:17.668240  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:17.668301  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:17.704555  219832 cri.go:89] found id: ""
	I1019 17:16:17.704580  219832 logs.go:282] 0 containers: []
	W1019 17:16:17.704590  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:17.704600  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:17.704613  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:17.722471  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:17.722508  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:17.796245  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:17.796269  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:17.796283  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:17.834256  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:17.834294  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:17.904767  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:17.904807  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:17.938172  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:17.938205  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:18.012110  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:18.012147  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:18.045593  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:18.045628  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Oct 19 17:16:09 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:09.896420362Z" level=info msg="Started container" PID=1863 containerID=3385f3c452cbb7ebfee39ee545d223fa84b36a3a5cdd6a8a5191ca54e10d6de9 description=kube-system/coredns-66bc5c9577-2r8tf/coredns id=6c30e660-261d-4376-9c95-05f1481cc8b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=508fe25cb5cb2b4b39283b738f403b74bd295a0e74314b6e24867914cacb6b13
	Oct 19 17:16:09 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:09.896589057Z" level=info msg="Started container" PID=1862 containerID=e686f0a02d86b930a3249aff65a102ff628124db801c8e938a6ec65992edfe9d description=kube-system/storage-provisioner/storage-provisioner id=b284377d-be29-42df-aa51-e5fffe4d7adc name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a447af31081e19b2972ae975c15cb50fe347d11e51548025da7fa4e4461f8f
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.752686424Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ddb621fa-471d-4bd8-b570-33cfe6a6a6c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.752805217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.757414008Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a2388a94a07ff9ad75d9c4e06e571fe00db6867a8cf77a45625789102e144e1e UID:bf66eee5-05b6-4586-8e99-ab43b66c547d NetNS:/var/run/netns/b9859eb8-50aa-495d-ac6b-01df2ff32037 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005165d8}] Aliases:map[]}"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.757441293Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.767518369Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a2388a94a07ff9ad75d9c4e06e571fe00db6867a8cf77a45625789102e144e1e UID:bf66eee5-05b6-4586-8e99-ab43b66c547d NetNS:/var/run/netns/b9859eb8-50aa-495d-ac6b-01df2ff32037 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005165d8}] Aliases:map[]}"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.767690476Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.768474109Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.769297205Z" level=info msg="Ran pod sandbox a2388a94a07ff9ad75d9c4e06e571fe00db6867a8cf77a45625789102e144e1e with infra container: default/busybox/POD" id=ddb621fa-471d-4bd8-b570-33cfe6a6a6c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.772811544Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d701e14-acac-4f98-a398-9704cc061517 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.772967724Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7d701e14-acac-4f98-a398-9704cc061517 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.773015541Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7d701e14-acac-4f98-a398-9704cc061517 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.773816343Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db7f1685-5d77-41cd-a30d-5917f04f024d name=/runtime.v1.ImageService/PullImage
	Oct 19 17:16:12 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:12.775461878Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.561648133Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=db7f1685-5d77-41cd-a30d-5917f04f024d name=/runtime.v1.ImageService/PullImage
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.562419362Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6c7cfba2-d8f2-43aa-a9ad-93e5af054a3f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.563830655Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4d717193-fb33-488a-b372-ce5c6d22db8f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.567700492Z" level=info msg="Creating container: default/busybox/busybox" id=f1458146-95b4-4c2a-8ede-6136e28da779 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.568587312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.572599366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.574956459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.597689618Z" level=info msg="Created container 64ac6f43a898532567f21208971b9b03da3bd0e0db585615823c27dbd9499ed0: default/busybox/busybox" id=f1458146-95b4-4c2a-8ede-6136e28da779 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.598350045Z" level=info msg="Starting container: 64ac6f43a898532567f21208971b9b03da3bd0e0db585615823c27dbd9499ed0" id=15bb8ce9-b74f-41e7-9a6a-d0d042e19fcf name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:13 default-k8s-diff-port-663015 crio[778]: time="2025-10-19T17:16:13.600032078Z" level=info msg="Started container" PID=1944 containerID=64ac6f43a898532567f21208971b9b03da3bd0e0db585615823c27dbd9499ed0 description=default/busybox/busybox id=15bb8ce9-b74f-41e7-9a6a-d0d042e19fcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a2388a94a07ff9ad75d9c4e06e571fe00db6867a8cf77a45625789102e144e1e
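
The ImageStatus/PullImage sequence between 17:16:12.772 and 17:16:13.561 above is the kubelet's standard check-then-pull flow: only when CRI-O reports the image missing does it issue a pull. Roughly the same flow, shelling out to crictl instead of speaking the CRI gRPC API directly (the sudo usage is an assumption about the environment):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the check-then-pull flow in the CRI-O log: query the
// image status first, and pull only when nothing is found locally.
func ensureImage(image string) error {
	// Errors from the status query are treated as "not present".
	out, _ := exec.Command("sudo", "crictl", "images", "-q", image).Output()
	if strings.TrimSpace(string(out)) != "" {
		return nil // already present, no pull needed
	}
	pull := exec.Command("sudo", "crictl", "pull", image)
	if msg, err := pull.CombinedOutput(); err != nil {
		return fmt.Errorf("pull %s: %v: %s", image, err, msg)
	}
	return nil
}

func main() {
	if err := ensureImage("gcr.io/k8s-minikube/busybox:1.28.4-glibc"); err != nil {
		panic(err)
	}
}
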
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	64ac6f43a8985       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a2388a94a07ff       busybox                                                default
	3385f3c452cbb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   508fe25cb5cb2       coredns-66bc5c9577-2r8tf                               kube-system
	e686f0a02d86b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   02a447af31081       storage-provisioner                                    kube-system
	9350c53df59a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   b45d90f1917f1       kube-proxy-g62dn                                       kube-system
	7c88f87ad1567       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   efc16b884cb7f       kindnet-rrthg                                          kube-system
	dd35ce0dba2eb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      32 seconds ago      Running             kube-controller-manager   0                   77e2b611d73ea       kube-controller-manager-default-k8s-diff-port-663015   kube-system
	a665efa541d00       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      32 seconds ago      Running             kube-apiserver            0                   36d4380eb5328       kube-apiserver-default-k8s-diff-port-663015            kube-system
	9b681616bdd0d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      32 seconds ago      Running             kube-scheduler            0                   bdeca28aa5e8b       kube-scheduler-default-k8s-diff-port-663015            kube-system
	6bbc0735a63e2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      32 seconds ago      Running             etcd                      0                   213ceb763fc52       etcd-default-k8s-diff-port-663015                      kube-system
	
	
	==> coredns [3385f3c452cbb7ebfee39ee545d223fa84b36a3a5cdd6a8a5191ca54e10d6de9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35371 - 18271 "HINFO IN 6541089399760033339.2989814994281082383. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072521709s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-663015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-663015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-663015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-663015
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:09 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:09 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:09 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:16:09 +0000   Sun, 19 Oct 2025 17:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-663015
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                e7d4d908-64b0-4858-bf62-c6148a998433
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-2r8tf                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-663015                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-rrthg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-663015             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-663015    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-g62dn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-663015             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node default-k8s-diff-port-663015 event: Registered Node default-k8s-diff-port-663015 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-663015 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [6bbc0735a63e281baa3736220cd90f24bfa53c367391b5519fd5d7020d683c9b] <==
	{"level":"warn","ts":"2025-10-19T17:15:49.935787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.943784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.951212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.962640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.977867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.984601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.991343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:49.998035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.005305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.012507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.019979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.026992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.033260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.042468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.049545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.056402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:50.107030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:03.096311Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.24382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-663015\" limit:1 ","response":"range_response_count:1 size:5639"}
	{"level":"info","ts":"2025-10-19T17:16:03.096391Z","caller":"traceutil/trace.go:172","msg":"trace[1228788489] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-663015; range_end:; response_count:1; response_revision:384; }","duration":"113.36095ms","start":"2025-10-19T17:16:02.983017Z","end":"2025-10-19T17:16:03.096378Z","steps":["trace[1228788489] 'range keys from in-memory index tree'  (duration: 113.072586ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:16:03.587623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.777221ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596484488085953 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" mod_revision:240 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T17:16:03.587731Z","caller":"traceutil/trace.go:172","msg":"trace[1709991930] linearizableReadLoop","detail":"{readStateIndex:398; appliedIndex:397; }","duration":"104.154768ms","start":"2025-10-19T17:16:03.483561Z","end":"2025-10-19T17:16:03.587715Z","steps":["trace[1709991930] 'read index received'  (duration: 27.353µs)","trace[1709991930] 'applied index is now lower than readState.Index'  (duration: 104.12621ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:16:03.587745Z","caller":"traceutil/trace.go:172","msg":"trace[372363897] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"314.937969ms","start":"2025-10-19T17:16:03.272785Z","end":"2025-10-19T17:16:03.587723Z","steps":["trace[372363897] 'process raft request'  (duration: 188.578413ms)","trace[372363897] 'compare'  (duration: 125.638938ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:16:03.587865Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.301878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-663015\" limit:1 ","response":"range_response_count:1 size:5639"}
	{"level":"info","ts":"2025-10-19T17:16:03.587985Z","caller":"traceutil/trace.go:172","msg":"trace[708147317] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-663015; range_end:; response_count:1; response_revision:386; }","duration":"104.423698ms","start":"2025-10-19T17:16:03.483550Z","end":"2025-10-19T17:16:03.587974Z","steps":["trace[708147317] 'agreement among raft nodes before linearized reading'  (duration: 104.205637ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T17:16:03.587859Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-19T17:16:03.272764Z","time spent":"315.040367ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" mod_revision:240 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-663015\" > >"}
	
	
	==> kernel <==
	 17:16:21 up 58 min,  0 user,  load average: 3.41, 2.95, 1.82
	Linux default-k8s-diff-port-663015 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c88f87ad1567af5c7c806f0b5b821d57866642c416cdd9b98b709759d8f4054] <==
	I1019 17:15:58.940899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:58.941817       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:15:58.942185       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:58.942208       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:58.942222       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:59.145310       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:59.145411       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:59.145423       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:59.244435       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:15:59.646527       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:15:59.646552       1 metrics.go:72] Registering metrics
	I1019 17:15:59.646599       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:09.145602       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:16:09.145663       1 main.go:301] handling current node
	I1019 17:16:19.148980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:16:19.149013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a665efa541d006936088ec4059a909aa63c272cf06ce3b8fa89e70b2d7aa7b7a] <==
	I1019 17:15:50.591328       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:15:50.591901       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:15:50.599427       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:15:50.599900       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:50.605132       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:50.606107       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:15:50.770618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:15:51.494862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:15:51.498390       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:15:51.498410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:15:51.991366       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:15:52.034677       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:15:52.101855       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:15:52.109820       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:15:52.110903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:15:52.115866       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:15:52.531903       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:15:53.343036       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:15:53.355089       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:15:53.363598       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:15:58.183955       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:15:58.334357       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:15:58.636824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:15:58.641475       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 17:16:19.551418       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:35158: use of closed network connection
	
	
	==> kube-controller-manager [dd35ce0dba2ebcd25fd233fa9a2717107bad5bcfa32c1759866b4e3185ed3468] <==
	I1019 17:15:57.532861       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:57.532892       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:15:57.532922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:57.532943       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:57.532951       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:57.534112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:57.534559       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:15:57.534646       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:15:57.534727       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:15:57.534780       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:15:57.534791       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:15:57.534798       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:15:57.536737       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:15:57.536866       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:15:57.536886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:57.536959       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-663015"
	I1019 17:15:57.537010       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:15:57.537948       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:15:57.537965       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:15:57.537974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:15:57.539929       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:15:57.541256       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-663015" podCIDRs=["10.244.0.0/24"]
	I1019 17:15:57.545662       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:15:57.553537       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:12.540501       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9350c53df59a35b46c1928145c29ba4ad4220b9d3231a64cb6138bf7b1af7a0f] <==
	I1019 17:15:58.786093       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:58.839062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:58.940419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:58.940488       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:15:58.940746       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:58.996372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:58.996453       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:59.008217       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:59.009058       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:59.009153       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:59.010946       1 config.go:200] "Starting service config controller"
	I1019 17:15:59.010964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:59.010990       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:59.010997       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:59.011011       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:59.011016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:59.011244       1 config.go:309] "Starting node config controller"
	I1019 17:15:59.011267       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:59.112186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:59.112237       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:15:59.112271       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:15:59.112806       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9b681616bdd0ded58a6cc4f7ca9f3822873ced27f4afd31afa13d7abe8d41edd] <==
	E1019 17:15:50.549200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:15:50.549375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:15:50.549401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:15:50.549468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:15:50.549583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:15:50.549648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:15:50.549707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:15:50.549743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:15:50.549791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:15:50.550206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:15:50.550263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:15:51.402231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:15:51.452923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:15:51.467759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:15:51.473965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:15:51.493977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:15:51.563852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:15:51.588037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:15:51.597224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:15:51.631478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:15:51.730912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:15:51.770332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:15:51.785680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:15:51.813984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 17:15:53.846268       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:15:54 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:54.273866    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-663015" podStartSLOduration=3.273844781 podStartE2EDuration="3.273844781s" podCreationTimestamp="2025-10-19 17:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:54.273840853 +0000 UTC m=+1.170684750" watchObservedRunningTime="2025-10-19 17:15:54.273844781 +0000 UTC m=+1.170688677"
	Oct 19 17:15:54 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:54.274054    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-663015" podStartSLOduration=1.274039462 podStartE2EDuration="1.274039462s" podCreationTimestamp="2025-10-19 17:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:54.258383038 +0000 UTC m=+1.155226935" watchObservedRunningTime="2025-10-19 17:15:54.274039462 +0000 UTC m=+1.170883357"
	Oct 19 17:15:54 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:54.285250    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-663015" podStartSLOduration=1.2852292699999999 podStartE2EDuration="1.28522927s" podCreationTimestamp="2025-10-19 17:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:54.284953518 +0000 UTC m=+1.181797415" watchObservedRunningTime="2025-10-19 17:15:54.28522927 +0000 UTC m=+1.182073165"
	Oct 19 17:15:54 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:54.304881    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-663015" podStartSLOduration=1.304858678 podStartE2EDuration="1.304858678s" podCreationTimestamp="2025-10-19 17:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:54.295518848 +0000 UTC m=+1.192362744" watchObservedRunningTime="2025-10-19 17:15:54.304858678 +0000 UTC m=+1.201702571"
	Oct 19 17:15:57 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:57.626622    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:15:57 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:57.627468    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417722    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbc7v\" (UniqueName: \"kubernetes.io/projected/4d236960-9ec3-445d-98fd-d04af4ea465f-kube-api-access-gbc7v\") pod \"kindnet-rrthg\" (UID: \"4d236960-9ec3-445d-98fd-d04af4ea465f\") " pod="kube-system/kindnet-rrthg"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417821    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0096be5f-f9a5-4aab-a41f-67004f646d90-kube-proxy\") pod \"kube-proxy-g62dn\" (UID: \"0096be5f-f9a5-4aab-a41f-67004f646d90\") " pod="kube-system/kube-proxy-g62dn"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417853    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mnn\" (UniqueName: \"kubernetes.io/projected/0096be5f-f9a5-4aab-a41f-67004f646d90-kube-api-access-n7mnn\") pod \"kube-proxy-g62dn\" (UID: \"0096be5f-f9a5-4aab-a41f-67004f646d90\") " pod="kube-system/kube-proxy-g62dn"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417892    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d236960-9ec3-445d-98fd-d04af4ea465f-cni-cfg\") pod \"kindnet-rrthg\" (UID: \"4d236960-9ec3-445d-98fd-d04af4ea465f\") " pod="kube-system/kindnet-rrthg"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417914    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d236960-9ec3-445d-98fd-d04af4ea465f-lib-modules\") pod \"kindnet-rrthg\" (UID: \"4d236960-9ec3-445d-98fd-d04af4ea465f\") " pod="kube-system/kindnet-rrthg"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417941    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0096be5f-f9a5-4aab-a41f-67004f646d90-lib-modules\") pod \"kube-proxy-g62dn\" (UID: \"0096be5f-f9a5-4aab-a41f-67004f646d90\") " pod="kube-system/kube-proxy-g62dn"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.417972    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0096be5f-f9a5-4aab-a41f-67004f646d90-xtables-lock\") pod \"kube-proxy-g62dn\" (UID: \"0096be5f-f9a5-4aab-a41f-67004f646d90\") " pod="kube-system/kube-proxy-g62dn"
	Oct 19 17:15:58 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:58.418106    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d236960-9ec3-445d-98fd-d04af4ea465f-xtables-lock\") pod \"kindnet-rrthg\" (UID: \"4d236960-9ec3-445d-98fd-d04af4ea465f\") " pod="kube-system/kindnet-rrthg"
	Oct 19 17:15:59 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:59.265277    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rrthg" podStartSLOduration=1.265248012 podStartE2EDuration="1.265248012s" podCreationTimestamp="2025-10-19 17:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:59.251595171 +0000 UTC m=+6.148439068" watchObservedRunningTime="2025-10-19 17:15:59.265248012 +0000 UTC m=+6.162091909"
	Oct 19 17:15:59 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:15:59.265435    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g62dn" podStartSLOduration=1.265427364 podStartE2EDuration="1.265427364s" podCreationTimestamp="2025-10-19 17:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:15:59.265001724 +0000 UTC m=+6.161845623" watchObservedRunningTime="2025-10-19 17:15:59.265427364 +0000 UTC m=+6.162271263"
	Oct 19 17:16:09 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:09.513913    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:16:09 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:09.599080    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70-config-volume\") pod \"coredns-66bc5c9577-2r8tf\" (UID: \"ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70\") " pod="kube-system/coredns-66bc5c9577-2r8tf"
	Oct 19 17:16:09 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:09.599179    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74-tmp\") pod \"storage-provisioner\" (UID: \"c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74\") " pod="kube-system/storage-provisioner"
	Oct 19 17:16:09 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:09.599302    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l856t\" (UniqueName: \"kubernetes.io/projected/ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70-kube-api-access-l856t\") pod \"coredns-66bc5c9577-2r8tf\" (UID: \"ad80fc3e-0eba-4bcd-a0bb-5b0ffbcc9d70\") " pod="kube-system/coredns-66bc5c9577-2r8tf"
	Oct 19 17:16:09 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:09.599348    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cxr\" (UniqueName: \"kubernetes.io/projected/c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74-kube-api-access-k6cxr\") pod \"storage-provisioner\" (UID: \"c394d5d6-2e4b-4d29-8a5d-cdf33dcbba74\") " pod="kube-system/storage-provisioner"
	Oct 19 17:16:10 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:10.273052    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.273027682 podStartE2EDuration="11.273027682s" podCreationTimestamp="2025-10-19 17:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:10.272838004 +0000 UTC m=+17.169681900" watchObservedRunningTime="2025-10-19 17:16:10.273027682 +0000 UTC m=+17.169871579"
	Oct 19 17:16:12 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:12.445490    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2r8tf" podStartSLOduration=14.445463266 podStartE2EDuration="14.445463266s" podCreationTimestamp="2025-10-19 17:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:10.284441293 +0000 UTC m=+17.181285190" watchObservedRunningTime="2025-10-19 17:16:12.445463266 +0000 UTC m=+19.342307163"
	Oct 19 17:16:12 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:12.518800    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29m6\" (UniqueName: \"kubernetes.io/projected/bf66eee5-05b6-4586-8e99-ab43b66c547d-kube-api-access-z29m6\") pod \"busybox\" (UID: \"bf66eee5-05b6-4586-8e99-ab43b66c547d\") " pod="default/busybox"
	Oct 19 17:16:14 default-k8s-diff-port-663015 kubelet[1334]: I1019 17:16:14.286351    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.496421858 podStartE2EDuration="2.286315411s" podCreationTimestamp="2025-10-19 17:16:12 +0000 UTC" firstStartedPulling="2025-10-19 17:16:12.773385373 +0000 UTC m=+19.670229249" lastFinishedPulling="2025-10-19 17:16:13.563278926 +0000 UTC m=+20.460122802" observedRunningTime="2025-10-19 17:16:14.285967899 +0000 UTC m=+21.182811796" watchObservedRunningTime="2025-10-19 17:16:14.286315411 +0000 UTC m=+21.183159308"
	
	
	==> storage-provisioner [e686f0a02d86b930a3249aff65a102ff628124db801c8e938a6ec65992edfe9d] <==
	I1019 17:16:09.909892       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:16:09.918588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:16:09.918638       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:16:09.921198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:09.925671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:16:09.925876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:16:09.926043       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_515cd59c-b436-48ab-bd75-64f04084929b!
	I1019 17:16:09.926020       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5184909-d0d7-4566-badd-0d775b85f21e", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-663015_515cd59c-b436-48ab-bd75-64f04084929b became leader
	W1019 17:16:09.928709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:09.933806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:16:10.027165       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_515cd59c-b436-48ab-bd75-64f04084929b!
	W1019 17:16:11.937182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:11.943338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:13.946575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:13.951839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:15.954877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:15.959147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:17.963583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:17.968020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:19.972390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:16:19.977796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.23s)
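The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader election, which (per the LeaderElection event at 17:16:09) locks on an Endpoints object, kube-system/k8s.io-minikube-hostpath, rather than a coordination.k8s.io Lease. A quick way to inspect that lock object while triaging, assuming the test's kube context is still available (illustrative commands, not part of the harness):

	# Dump the Endpoints-based leader-election lock named in the provisioner's event
	kubectl --context default-k8s-diff-port-663015 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml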

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.353286ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
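The MK_ADDON_ENABLE_PAUSED failure above is raised by minikube's paused-container check: per the stderr, it ran `sudo runc list -f json` on the node before enabling the addon, and that command exited 1 because /run/runc does not exist. A minimal sketch for reproducing the check by hand, assuming the newest-cni-848035 profile is still running (illustrative commands, not part of the harness):

	# Re-run the exact command the pause check executed on the node
	out/minikube-linux-amd64 -p newest-cni-848035 ssh -- sudo runc list -f json
	# Confirm the missing runc state directory reported in the error
	out/minikube-linux-amd64 -p newest-cni-848035 ssh -- ls -ld /run/runc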
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-848035
helpers_test.go:243: (dbg) docker inspect newest-cni-848035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	        "Created": "2025-10-19T17:16:04.614467412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263623,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:04.661908627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hosts",
	        "LogPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077-json.log",
	        "Name": "/newest-cni-848035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-848035:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-848035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	                "LowerDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-848035",
	                "Source": "/var/lib/docker/volumes/newest-cni-848035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-848035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-848035",
	                "name.minikube.sigs.k8s.io": "newest-cni-848035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c25c854c31e65c4c6e482ee731bbcea572881189d5917cd5f316c287e329edc0",
	            "SandboxKey": "/var/run/docker/netns/c25c854c31e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-848035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:04:22:1c:f6:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd85ff71a8849a849b2f5448c4af9d5d2d209e0b42263ef0a6ae677b20846d2a",
	                    "EndpointID": "ee77a77be1a03d3080d871ee27a6c487d541c35c93cb269d69ab0a3a78347486",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-848035",
	                        "d4878977e53a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
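When scanning a full docker inspect dump like the one above, it can be faster to pull only the fields that matter using Docker's --format Go templates. A short sketch, assuming the container from this run is still up (illustrative):

	# Container state: is it running, and is it paused at the Docker level?
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-848035
	# Host port mapped to the apiserver's 8443 (33087 in the dump above)
	docker port newest-cni-848035 8443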
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-848035 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-806996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │                     │
	│ stop    │ -p no-preload-806996 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ addons  │ enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p cert-expiration-132648                                                                                                                                                                                                                     │ cert-expiration-132648       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-663015 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:16:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:16:24.527107  268862 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:24.527386  268862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:24.527396  268862 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:24.527400  268862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:24.527648  268862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:16:24.528148  268862 out.go:368] Setting JSON to false
	I1019 17:16:24.529372  268862 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3531,"bootTime":1760890654,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:16:24.529457  268862 start.go:143] virtualization: kvm guest
	I1019 17:16:24.531345  268862 out.go:179] * [embed-certs-090139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:16:24.532919  268862 notify.go:221] Checking for updates...
	I1019 17:16:24.532940  268862 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:24.534295  268862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:24.535724  268862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:24.536953  268862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:16:24.538236  268862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:16:24.539667  268862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:16:24.541417  268862 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:24.542013  268862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:24.569281  268862 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:16:24.569441  268862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:24.632387  268862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-19 17:16:24.62189623 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:24.632484  268862 docker.go:319] overlay module found
	I1019 17:16:24.634274  268862 out.go:179] * Using the docker driver based on existing profile
	I1019 17:16:24.635666  268862 start.go:309] selected driver: docker
	I1019 17:16:24.635683  268862 start.go:930] validating driver "docker" against &{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:24.635793  268862 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:24.636479  268862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:24.712195  268862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-19 17:16:24.701540348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:24.712483  268862 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:24.712511  268862 cni.go:84] Creating CNI manager for ""
	I1019 17:16:24.712554  268862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:24.712585  268862 start.go:353] cluster config:
	{Name:embed-certs-090139 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-090139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:24.714461  268862 out.go:179] * Starting "embed-certs-090139" primary control-plane node in "embed-certs-090139" cluster
	I1019 17:16:24.716667  268862 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:16:24.718010  268862 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:16:24.719204  268862 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:24.719240  268862 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:16:24.719259  268862 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:16:24.719286  268862 cache.go:59] Caching tarball of preloaded images
	I1019 17:16:24.719388  268862 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:16:24.719405  268862 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:16:24.719519  268862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/embed-certs-090139/config.json ...
	I1019 17:16:24.740501  268862 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:16:24.740520  268862 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:16:24.740535  268862 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:16:24.740558  268862 start.go:360] acquireMachinesLock for embed-certs-090139: {Name:mkdaa028ca10b90b55fac4626a0f749931b30e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:16:24.740612  268862 start.go:364] duration metric: took 36.987µs to acquireMachinesLock for "embed-certs-090139"
	I1019 17:16:24.740629  268862 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:16:24.740636  268862 fix.go:54] fixHost starting: 
	I1019 17:16:24.740843  268862 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:16:24.759248  268862 fix.go:112] recreateIfNeeded on embed-certs-090139: state=Stopped err=<nil>
	W1019 17:16:24.759294  268862 fix.go:138] unexpected machine state, will restart: <nil>
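The fixHost path above hinges on a single Docker query: "state=Stopped" is why the restart branch is taken. A minimal sketch of the same check (this reproduces only the query, not minikube's restart logic):

    # Prints one of: created, running, paused, restarting, removing, exited, dead
    docker container inspect embed-certs-090139 --format '{{.State.Status}}'
    # Anything but "running" and minikube restarts the container.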
	I1019 17:16:24.542183  262636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:16:25.042744  262636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:16:25.542182  262636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:16:26.042215  262636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:16:26.110156  262636 kubeadm.go:1114] duration metric: took 4.173200051s to wait for elevateKubeSystemPrivileges
	I1019 17:16:26.110193  262636 kubeadm.go:403] duration metric: took 13.949159868s to StartCluster
	I1019 17:16:26.110211  262636 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:26.110273  262636 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:26.111421  262636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:26.111662  262636 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:26.111697  262636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:16:26.111716  262636 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:26.111822  262636 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-848035"
	I1019 17:16:26.111842  262636 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-848035"
	I1019 17:16:26.111856  262636 addons.go:70] Setting default-storageclass=true in profile "newest-cni-848035"
	I1019 17:16:26.111874  262636 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:26.111888  262636 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-848035"
	I1019 17:16:26.111934  262636 config.go:182] Loaded profile config "newest-cni-848035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:26.112275  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:26.112408  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:26.113611  262636 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:26.114977  262636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:26.138312  262636 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:16:26.138391  262636 addons.go:239] Setting addon default-storageclass=true in "newest-cni-848035"
	I1019 17:16:26.138432  262636 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:26.138902  262636 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:26.139701  262636 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:26.139732  262636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:16:26.139784  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:26.173636  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:26.174047  262636 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:26.174084  262636 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:16:26.174156  262636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:26.199478  262636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
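The two `docker container inspect -f` calls above recover the host port that Docker mapped to the guest's sshd; `docker port` answers the same question more directly (profile name taken from the log):

    docker port newest-cni-848035 22
    # -> 0.0.0.0:33084, the Port:33084 the ssh clients above connect to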
	I1019 17:16:26.210681  262636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
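That one-liner is dense. Unrolled, it fetches the coredns ConfigMap, splices a `hosts` block (mapping host.minikube.internal to the gateway IP) in front of the `forward` plugin, inserts `log` after `errors`, and replaces the object. A readable re-expansion, assuming kubectl points at the same cluster:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -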
	I1019 17:16:26.258523  262636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:26.290577  262636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:26.310479  262636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:26.402438  262636 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1019 17:16:26.403642  262636 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:26.403695  262636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:26.597415  262636 api_server.go:72] duration metric: took 485.715324ms to wait for apiserver process to appear ...
	I1019 17:16:26.597444  262636 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:26.597465  262636 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:26.602724  262636 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
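The healthz wait can be reproduced by hand; /healthz (like /livez and /readyz) is readable anonymously on a default-configured apiserver:

    curl -k https://192.168.76.2:8443/healthz   # -k skips TLS verification
    # -> ok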
	I1019 17:16:26.603548  262636 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:26.603579  262636 api_server.go:131] duration metric: took 6.126591ms to wait for apiserver health ...
	I1019 17:16:26.603589  262636 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:26.606283  262636 system_pods.go:59] 7 kube-system pods found
	I1019 17:16:26.606328  262636 system_pods.go:61] "etcd-newest-cni-848035" [d4f66958-0c51-495b-9982-fbc5fa2eaf5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:26.606341  262636 system_pods.go:61] "kindnet-cldtb" [3371a4c2-e7be-4f7c-9e77-69cce40a6458] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:26.606352  262636 system_pods.go:61] "kube-apiserver-newest-cni-848035" [0381a31e-6a3a-48c8-acb0-905da2c9e2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:26.606361  262636 system_pods.go:61] "kube-controller-manager-newest-cni-848035" [26b369eb-d2b2-488b-afd2-8958a5a8f955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:26.606370  262636 system_pods.go:61] "kube-proxy-4xgrb" [f332f5bc-a940-414b-816d-fe262c303b5a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:26.606385  262636 system_pods.go:61] "kube-scheduler-newest-cni-848035" [22b426e0-aafb-4a62-9535-894b75da5f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:26.606392  262636 system_pods.go:61] "storage-provisioner" [b4254e2b-7d6f-4957-a6c4-81ea56715968] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:16:26.606404  262636 system_pods.go:74] duration metric: took 2.807796ms to wait for pod list to return data ...
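The Pending storage-provisioner above is expected this early: the node still carries the node.kubernetes.io/not-ready taint quoted in the scheduler message, which clears once a CNI config exists and the kubelet reports Ready. To watch it from outside:

    kubectl describe node newest-cni-848035 | grep Taints
    kubectl -n kube-system get pod storage-provisioner -o wide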
	I1019 17:16:26.606418  262636 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:26.606415  262636 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:16:26.608589  262636 addons.go:515] duration metric: took 496.874424ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:16:26.609875  262636 default_sa.go:45] found service account: "default"
	I1019 17:16:26.609898  262636 default_sa.go:55] duration metric: took 3.463724ms for default service account to be created ...
	I1019 17:16:26.609911  262636 kubeadm.go:587] duration metric: took 498.217041ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:16:26.609932  262636 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:26.612152  262636 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:26.612187  262636 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:26.612202  262636 node_conditions.go:105] duration metric: took 2.264547ms to run NodePressure ...
	I1019 17:16:26.612213  262636 start.go:242] waiting for startup goroutines ...
	I1019 17:16:26.906931  262636 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-848035" context rescaled to 1 replicas
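"rescaled to 1 replicas" is minikube trimming the default two-replica coredns Deployment down for a single-node cluster; the equivalent manual step would be:

    kubectl -n kube-system scale deployment coredns --replicas=1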
	I1019 17:16:26.906969  262636 start.go:247] waiting for cluster config update ...
	I1019 17:16:26.906981  262636 start.go:256] writing updated cluster config ...
	I1019 17:16:26.907344  262636 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:26.963425  262636 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:16:26.965749  262636 out.go:179] * Done! kubectl is now configured to use "newest-cni-848035" cluster and "default" namespace by default
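Everything below is the diagnostic bundle for the newest-cni-848035 profile. Roughly the same sections (CRI-O log, container status, node description, dmesg, per-component logs) can be regenerated with:

    minikube -p newest-cni-848035 logs --file=logs.txt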
	
	
	==> CRI-O <==
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.656943162Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.658201955Z" level=info msg="Running pod sandbox: kube-system/kindnet-cldtb/POD" id=3b10c7d5-9dfd-4ade-a90a-a448b415d173 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.658302795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.658390763Z" level=info msg="Ran pod sandbox e4a2485ffd72dac1fd89d9acfd1c05d2f0c22955f196019262cb7373c9a6ad60 with infra container: kube-system/kube-proxy-4xgrb/POD" id=f0f33712-467b-4113-822b-c0466fd0d475 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.659723412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b9e17719-d90a-481b-a376-b62716cb4c93 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.661299691Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3b10c7d5-9dfd-4ade-a90a-a448b415d173 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.661343349Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21b61ac4-f2b3-4239-822b-14ec1201c902 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.663983125Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.664972582Z" level=info msg="Ran pod sandbox 04896c66ab6a63ca197e345fd94c1704166d31dede85bc5579091e0007e97f87 with infra container: kube-system/kindnet-cldtb/POD" id=3b10c7d5-9dfd-4ade-a90a-a448b415d173 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.666050514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bdf23b28-bb13-47e8-bb01-f938018df846 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.666560263Z" level=info msg="Creating container: kube-system/kube-proxy-4xgrb/kube-proxy" id=e32ef643-5030-4cf5-b4c8-220524ad5bf8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.666883442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.66692847Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d1485b25-4852-40ce-a9b4-5b1a59c8671f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.6719434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.67283596Z" level=info msg="Creating container: kube-system/kindnet-cldtb/kindnet-cni" id=149c8755-037d-42c5-a143-afdab9608244 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.673111136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.673957156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.678766326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.679391995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.704830318Z" level=info msg="Created container 745b5c1e450a9cc2e54b5d9f6602face0e7a97b7785fffaef4c21352c828aaf7: kube-system/kindnet-cldtb/kindnet-cni" id=149c8755-037d-42c5-a143-afdab9608244 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.705716422Z" level=info msg="Starting container: 745b5c1e450a9cc2e54b5d9f6602face0e7a97b7785fffaef4c21352c828aaf7" id=26c19c57-904d-4946-919a-826542b0244b name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.707829483Z" level=info msg="Started container" PID=1598 containerID=745b5c1e450a9cc2e54b5d9f6602face0e7a97b7785fffaef4c21352c828aaf7 description=kube-system/kindnet-cldtb/kindnet-cni id=26c19c57-904d-4946-919a-826542b0244b name=/runtime.v1.RuntimeService/StartContainer sandboxID=04896c66ab6a63ca197e345fd94c1704166d31dede85bc5579091e0007e97f87
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.709337003Z" level=info msg="Created container 4785476d50d682eb41bd8da287e4ae04f133c7ab692ffdf670081ce58c7d4d75: kube-system/kube-proxy-4xgrb/kube-proxy" id=e32ef643-5030-4cf5-b4c8-220524ad5bf8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.710101088Z" level=info msg="Starting container: 4785476d50d682eb41bd8da287e4ae04f133c7ab692ffdf670081ce58c7d4d75" id=0cefa6ce-beea-49f6-b5c7-2dc88f67a899 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:26 newest-cni-848035 crio[788]: time="2025-10-19T17:16:26.713690527Z" level=info msg="Started container" PID=1597 containerID=4785476d50d682eb41bd8da287e4ae04f133c7ab692ffdf670081ce58c7d4d75 description=kube-system/kube-proxy-4xgrb/kube-proxy id=0cefa6ce-beea-49f6-b5c7-2dc88f67a899 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4a2485ffd72dac1fd89d9acfd1c05d2f0c22955f196019262cb7373c9a6ad60
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	745b5c1e450a9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   04896c66ab6a6       kindnet-cldtb                               kube-system
	4785476d50d68       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   e4a2485ffd72d       kube-proxy-4xgrb                            kube-system
	759575d96a145       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   70f625878c7f0       kube-scheduler-newest-cni-848035            kube-system
	b4be8e69a9400       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   301b1a597d87f       kube-controller-manager-newest-cni-848035   kube-system
	b47fd67f6a5ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   a39e2cca559b5       kube-apiserver-newest-cni-848035            kube-system
	753bde07a315c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   9567e590594b2       etcd-newest-cni-848035                      kube-system
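The table above is CRI-level state; inside the node the same view comes from crictl (sudo is needed for the CRI socket):

    minikube -p newest-cni-848035 ssh -- sudo crictl ps -a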
	
	
	==> describe nodes <==
	Name:               newest-cni-848035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-848035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-848035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_16_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:16:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-848035
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-848035
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                646096bf-34be-4122-8013-0d1b140e3606
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-848035                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-cldtb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-848035             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-848035    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-4xgrb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-848035             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-848035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-848035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-848035 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-848035 event: Registered Node newest-cni-848035 in Controller
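The Ready=False condition above names the usual cold-start cause: no CNI configuration file in /etc/cni/net.d yet. With kindnet-cni already Running in the container table, the file lands within seconds and the taint is lifted; to verify:

    minikube -p newest-cni-848035 ssh -- ls /etc/cni/net.d
    kubectl get node newest-cni-848035 -w    # watch Ready flip to True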
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
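The repeated "martian source" lines are the kernel flagging packets that claim a loopback source (127.0.0.1) arriving on the pod network's eth0, a common artifact of hairpinned traffic in nested container setups; noisy, but harmless here. Whether they get logged at all is a sysctl:

    minikube -p newest-cni-848035 ssh -- sysctl net.ipv4.conf.all.log_martians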
	
	
	==> etcd [753bde07a315c25faa653560c9d9e65d2a182d7361840378cd376c40cbf89865] <==
	{"level":"warn","ts":"2025-10-19T17:16:17.819091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.828719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.836225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.854578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.857644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.864778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.872213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.879684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.886831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.892957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.900222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.907364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.914533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.922195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.929593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.937509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.945720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.953634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.961653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.969719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.977600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.991588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:17.998523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:18.006096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:18.061196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45420","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:16:28 up 58 min,  0 user,  load average: 3.13, 2.90, 1.81
	Linux newest-cni-848035 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [745b5c1e450a9cc2e54b5d9f6602face0e7a97b7785fffaef4c21352c828aaf7] <==
	I1019 17:16:26.861950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:26.862246       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:16:26.862403       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:26.862420       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:26.862442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:27.156794       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:27.156836       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:27.156849       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:27.156995       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:27.458456       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:27.458513       1 metrics.go:72] Registering metrics
	I1019 17:16:27.458661       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [b47fd67f6a5ed9c785da7e1ed1e7c311dfe554bde7c7ecdb6fdd47d193494aaa] <==
	I1019 17:16:18.522219       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:16:18.523587       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:18.524332       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:16:18.527664       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:16:18.527753       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:16:18.533471       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:16:18.533713       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:18.548374       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:19.426552       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:16:19.430616       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:16:19.430636       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:19.965609       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:20.009324       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:20.132183       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:16:20.139219       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 17:16:20.140412       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:20.147232       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:20.468445       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:21.061810       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:21.075033       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:16:21.083934       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:16:26.322003       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:16:26.474665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:16:26.479063       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:16:26.570269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b4be8e69a94003e9da8c1675b44e4e70e6401758fc2e924fffbea73b419c1689] <==
	I1019 17:16:25.434692       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:16:25.441828       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:16:25.467741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:16:25.467771       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:25.467780       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:16:25.467782       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:16:25.467811       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:16:25.467831       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:16:25.467746       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:25.467867       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:16:25.467877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:16:25.468027       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-848035"
	I1019 17:16:25.468122       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:16:25.468150       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:16:25.469107       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:25.469128       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:16:25.469184       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:16:25.469289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:16:25.469317       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:16:25.471753       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:16:25.471799       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:25.472936       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:25.472964       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:16:25.479206       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:16:25.491739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4785476d50d682eb41bd8da287e4ae04f133c7ab692ffdf670081ce58c7d4d75] <==
	I1019 17:16:26.750700       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:26.806841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:26.907194       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:26.907248       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:16:26.907364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:26.926376       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:26.926444       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:26.932140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:26.932627       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:26.932653       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:26.934104       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:26.934123       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:26.934170       1 config.go:200] "Starting service config controller"
	I1019 17:16:26.934177       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:26.934192       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:26.934197       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:26.934483       1 config.go:309] "Starting node config controller"
	I1019 17:16:26.934559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:26.934572       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:27.034927       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:27.034976       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:16:27.034986       1 shared_informer.go:356] "Caches are synced" controller="service config"
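The proxier warning above documents a tradeoff: setting route_localnet=1 makes NodePorts reachable on 127.0.0.1 but weakens the kernel's usual filtering of loopback-addressed traffic. Confirming the setting on the node:

    minikube -p newest-cni-848035 ssh -- sysctl net.ipv4.conf.all.route_localnet
    # -> net.ipv4.conf.all.route_localnet = 1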
	
	
	==> kube-scheduler [759575d96a14528231bfe8d2521d6ae4de72bdfc51b3f7d9a498b6e31eb6a213] <==
	E1019 17:16:18.480925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:16:18.481000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:16:18.481481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:16:18.481483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:16:18.481560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:16:18.481688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:16:18.481753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:16:18.481784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:16:18.481833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:16:18.481887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:16:18.481875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:16:18.481894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:16:18.481862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:16:18.482018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:16:18.482018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:16:18.482024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:16:18.482101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:16:19.321013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:16:19.335396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:16:19.376806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:16:19.449361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:16:19.580823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:16:19.611221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:16:19.970281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 17:16:22.879564       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.107326    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba242fca90188dfe9986f19b94416df8-kubeconfig\") pod \"kube-scheduler-newest-cni-848035\" (UID: \"ba242fca90188dfe9986f19b94416df8\") " pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.895291    1312 apiserver.go:52] "Watching apiserver"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.906227    1312 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.943960    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.944199    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.944448    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: E1019 17:16:21.954404    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-848035\" already exists" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: E1019 17:16:21.955774    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-848035\" already exists" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: E1019 17:16:21.957031    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-848035\" already exists" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.975174    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-848035" podStartSLOduration=0.975149136 podStartE2EDuration="975.149136ms" podCreationTimestamp="2025-10-19 17:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:21.965465057 +0000 UTC m=+1.141881805" watchObservedRunningTime="2025-10-19 17:16:21.975149136 +0000 UTC m=+1.151565897"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.985417    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-848035" podStartSLOduration=0.985395033 podStartE2EDuration="985.395033ms" podCreationTimestamp="2025-10-19 17:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:21.975363356 +0000 UTC m=+1.151780095" watchObservedRunningTime="2025-10-19 17:16:21.985395033 +0000 UTC m=+1.161811782"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.996157    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-848035" podStartSLOduration=2.9961378 podStartE2EDuration="2.9961378s" podCreationTimestamp="2025-10-19 17:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:21.996119101 +0000 UTC m=+1.172535852" watchObservedRunningTime="2025-10-19 17:16:21.9961378 +0000 UTC m=+1.172554547"
	Oct 19 17:16:21 newest-cni-848035 kubelet[1312]: I1019 17:16:21.996429    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-848035" podStartSLOduration=0.99641033 podStartE2EDuration="996.41033ms" podCreationTimestamp="2025-10-19 17:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:21.985570204 +0000 UTC m=+1.161986932" watchObservedRunningTime="2025-10-19 17:16:21.99641033 +0000 UTC m=+1.172827078"
	Oct 19 17:16:25 newest-cni-848035 kubelet[1312]: I1019 17:16:25.459631    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:16:25 newest-cni-848035 kubelet[1312]: I1019 17:16:25.460385    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446384    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-xtables-lock\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446445    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-xtables-lock\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446521    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slnc\" (UniqueName: \"kubernetes.io/projected/f332f5bc-a940-414b-816d-fe262c303b5a-kube-api-access-2slnc\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446574    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-cni-cfg\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446594    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqwc2\" (UniqueName: \"kubernetes.io/projected/3371a4c2-e7be-4f7c-9e77-69cce40a6458-kube-api-access-tqwc2\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446617    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f332f5bc-a940-414b-816d-fe262c303b5a-kube-proxy\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446640    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-lib-modules\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:26 newest-cni-848035 kubelet[1312]: I1019 17:16:26.446683    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-lib-modules\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:27 newest-cni-848035 kubelet[1312]: I1019 17:16:27.067403    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cldtb" podStartSLOduration=1.067381732 podStartE2EDuration="1.067381732s" podCreationTimestamp="2025-10-19 17:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:27.067156363 +0000 UTC m=+6.243573117" watchObservedRunningTime="2025-10-19 17:16:27.067381732 +0000 UTC m=+6.243798479"
	Oct 19 17:16:27 newest-cni-848035 kubelet[1312]: I1019 17:16:27.087651    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4xgrb" podStartSLOduration=1.087630107 podStartE2EDuration="1.087630107s" podCreationTimestamp="2025-10-19 17:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:16:27.087536378 +0000 UTC m=+6.263953126" watchObservedRunningTime="2025-10-19 17:16:27.087630107 +0000 UTC m=+6.264046854"
	

                                                
                                                
-- /stdout --
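
Note on the kube-scheduler section in the dump above: the burst of "Failed to watch ... is forbidden" errors is the usual start-up race in which the scheduler's informers begin listing before its RBAC bindings have been reconciled; the errors stop once the caches sync (17:16:22 here). One way to confirm a binding has propagated is a SubjectAccessReview issued for the scheduler's user. The sketch below is illustrative only (client-go, a hypothetical check, not part of this test harness):

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from ~/.kube/config (clientcmd.RecommendedHomeFile).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Ask the API server whether system:kube-scheduler may list csinodes,
		// one of the resources failing in the log above.
		sar := &authv1.SubjectAccessReview{Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb: "list", Group: "storage.k8s.io", Resource: "csinodes",
			},
		}}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed)
	}
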
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-848035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4r958 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner: exit status 1 (61.449879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4r958" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)
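
The NotFound errors from the describe step above are most likely a namespace mismatch rather than a second failure: `kubectl get po -A` found the non-running pods across all namespaces (they live in kube-system), while `kubectl describe pod ...` without `-n` searched only the default namespace. A minimal client-go sketch (hypothetical, not the helpers_test.go implementation) of the same `status.phase!=Running` field-selector query:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from ~/.kube/config (clientcmd.RecommendedHomeFile).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Empty namespace = all namespaces, matching kubectl's -A flag.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
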

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-848035 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-848035 --alsologtostderr -v=1: exit status 80 (2.621183739s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-848035 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:16:42.984297  276311 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:42.984444  276311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:42.984450  276311 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:42.984456  276311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:42.984699  276311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:16:42.984975  276311 out.go:368] Setting JSON to false
	I1019 17:16:42.985024  276311 mustload.go:66] Loading cluster: newest-cni-848035
	I1019 17:16:42.985438  276311 config.go:182] Loaded profile config "newest-cni-848035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:42.985942  276311 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:43.010567  276311 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:43.011041  276311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:43.105026  276311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-19 17:16:43.088337538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:43.105907  276311 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-848035 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:16:43.144454  276311 out.go:179] * Pausing node newest-cni-848035 ... 
	I1019 17:16:43.179786  276311 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:43.180187  276311 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:43.180251  276311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:43.208027  276311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:43.319910  276311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:43.334040  276311 pause.go:52] kubelet running: true
	I1019 17:16:43.334115  276311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:43.532601  276311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:43.532690  276311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:43.609525  276311 cri.go:89] found id: "bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f"
	I1019 17:16:43.609550  276311 cri.go:89] found id: "64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24"
	I1019 17:16:43.609555  276311 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:43.609558  276311 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:43.609560  276311 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:43.609564  276311 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:43.609566  276311 cri.go:89] found id: ""
	I1019 17:16:43.609602  276311 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:43.622943  276311 retry.go:31] will retry after 233.227159ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:43Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:43.857245  276311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:43.874733  276311 pause.go:52] kubelet running: false
	I1019 17:16:43.874796  276311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:44.048623  276311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:44.048727  276311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:44.137829  276311 cri.go:89] found id: "bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f"
	I1019 17:16:44.137864  276311 cri.go:89] found id: "64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24"
	I1019 17:16:44.137870  276311 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:44.137875  276311 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:44.137879  276311 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:44.137884  276311 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:44.137888  276311 cri.go:89] found id: ""
	I1019 17:16:44.137933  276311 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:44.153782  276311 retry.go:31] will retry after 334.560108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:44Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:44.489266  276311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:44.512871  276311 pause.go:52] kubelet running: false
	I1019 17:16:44.512938  276311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:44.685867  276311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:44.685966  276311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:44.780636  276311 cri.go:89] found id: "bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f"
	I1019 17:16:44.780666  276311 cri.go:89] found id: "64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24"
	I1019 17:16:44.780683  276311 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:44.780689  276311 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:44.780693  276311 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:44.780698  276311 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:44.780702  276311 cri.go:89] found id: ""
	I1019 17:16:44.780748  276311 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:44.795878  276311 retry.go:31] will retry after 456.715856ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:44Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:45.253287  276311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:45.266866  276311 pause.go:52] kubelet running: false
	I1019 17:16:45.266927  276311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:45.385656  276311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:45.385796  276311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:45.454322  276311 cri.go:89] found id: "bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f"
	I1019 17:16:45.454350  276311 cri.go:89] found id: "64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24"
	I1019 17:16:45.454356  276311 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:45.454361  276311 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:45.454379  276311 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:45.454384  276311 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:45.454388  276311 cri.go:89] found id: ""
	I1019 17:16:45.454431  276311 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:45.502950  276311 out.go:203] 
	W1019 17:16:45.509907  276311 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:16:45.509935  276311 out.go:285] * 
	* 
	W1019 17:16:45.513873  276311 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:16:45.519676  276311 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-848035 --alsologtostderr -v=1 failed: exit status 80
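
The pause itself failed because `sudo runc list -f json` kept returning `open /run/runc: no such file or directory`, so minikube could not enumerate the running containers it needed to pause; the log shows retry.go backing off with growing delays (233ms, 334ms, 456ms) before surfacing GUEST_PAUSE as exit status 80. A rough sketch of that kind of backoff loop, with assumed names rather than minikube's actual retry.go:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunningContainers is an assumed name; it re-runs `sudo runc list -f json`
	// with growing sleeps between attempts, the pattern retry.go shows above.
	func listRunningContainers(attempts int) ([]byte, error) {
		delay := 200 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = err
			time.Sleep(delay)
			delay += delay / 2 // grow ~1.5x per try, roughly the 233ms -> 334ms -> 456ms steps
		}
		return nil, fmt.Errorf("list running: %w", lastErr)
	}

	func main() {
		if _, err := listRunningContainers(4); err != nil {
			// minikube reports this condition as GUEST_PAUSE and exits 80.
			fmt.Println("pause failed:", err)
		}
	}
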
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-848035
helpers_test.go:243: (dbg) docker inspect newest-cni-848035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	        "Created": "2025-10-19T17:16:04.614467412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272742,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:32.061755844Z",
	            "FinishedAt": "2025-10-19T17:16:31.105026889Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hosts",
	        "LogPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077-json.log",
	        "Name": "/newest-cni-848035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-848035:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-848035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	                "LowerDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-848035",
	                "Source": "/var/lib/docker/volumes/newest-cni-848035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-848035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-848035",
	                "name.minikube.sigs.k8s.io": "newest-cni-848035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f60258a544ed3a3a5df9cea272b23320c3a4cf51c4f4efd7453932ba92f6a4a",
	            "SandboxKey": "/var/run/docker/netns/1f60258a544e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-848035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:cf:8e:6c:be:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd85ff71a8849a849b2f5448c4af9d5d2d209e0b42263ef0a6ae677b20846d2a",
	                    "EndpointID": "302dbc35bc854d4a8951a7868a9cfaf8fbf037e00c1f20a37e833b513d17e6d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-848035",
	                        "d4878977e53a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
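
As the pause log showed earlier, minikube discovers the SSH endpoint from this inspect output with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (127.0.0.1:33094 above). A small Go sketch (a hypothetical helper, not minikube's sshutil) that extracts the same value by decoding the inspect JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Just the fields we need from `docker inspect` output.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-848035").Output()
		if err != nil {
			panic(err)
		}
		var cs []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// 127.0.0.1:33094 in the output above; minikube dials its ssh client here.
		b := cs[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
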
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035: exit status 2 (332.034146ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-848035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-848035 logs -n 25: (1.069399242s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-663015 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:16:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:16:38.258301  274481 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:38.258615  274481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:38.258628  274481 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:38.258635  274481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:38.258873  274481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:16:38.259386  274481 out.go:368] Setting JSON to false
	I1019 17:16:38.260604  274481 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3544,"bootTime":1760890654,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:16:38.260696  274481 start.go:143] virtualization: kvm guest
	I1019 17:16:38.262889  274481 out.go:179] * [default-k8s-diff-port-663015] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:16:38.264756  274481 notify.go:221] Checking for updates...
	I1019 17:16:38.264784  274481 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:38.266508  274481 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:38.267722  274481 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:38.269377  274481 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:16:38.271505  274481 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:16:38.273134  274481 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:16:38.274964  274481 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:38.275647  274481 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:38.303784  274481 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:16:38.303857  274481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:38.368133  274481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:16:38.357874865 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:38.368275  274481 docker.go:319] overlay module found
	I1019 17:16:38.371129  274481 out.go:179] * Using the docker driver based on existing profile
	I1019 17:16:38.372655  274481 start.go:309] selected driver: docker
	I1019 17:16:38.372674  274481 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:38.372778  274481 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:38.373440  274481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:38.440582  274481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:16:38.429155941 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:38.440968  274481 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:38.441003  274481 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.441063  274481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.441127  274481 start.go:353] cluster config:
	{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:38.443264  274481 out.go:179] * Starting "default-k8s-diff-port-663015" primary control-plane node in "default-k8s-diff-port-663015" cluster
	I1019 17:16:38.444561  274481 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:16:38.445931  274481 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:16:38.447168  274481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.447219  274481 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:16:38.447232  274481 cache.go:59] Caching tarball of preloaded images
	I1019 17:16:38.447331  274481 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:16:38.447342  274481 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:16:38.447483  274481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:16:38.447738  274481 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:16:38.475723  274481 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:16:38.475744  274481 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:16:38.475767  274481 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:16:38.475796  274481 start.go:360] acquireMachinesLock for default-k8s-diff-port-663015: {Name:mkc3b977c4f353256fa3816417a52809b235a030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:16:38.475861  274481 start.go:364] duration metric: took 43.597µs to acquireMachinesLock for "default-k8s-diff-port-663015"
	I1019 17:16:38.475884  274481 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:16:38.475912  274481 fix.go:54] fixHost starting: 
	I1019 17:16:38.476205  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:38.495784  274481 fix.go:112] recreateIfNeeded on default-k8s-diff-port-663015: state=Stopped err=<nil>
	W1019 17:16:38.495815  274481 fix.go:138] unexpected machine state, will restart: <nil>
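The restart decision above hinges on a single probe: `docker container inspect <profile> --format={{.State.Status}}`, whose answer (the container had exited; fix.go reports it as state=Stopped) determines whether the machine is recreated or simply restarted. A minimal standalone Go sketch of that probe, assuming only a Docker CLI on PATH; the helper name and error handling are illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the Docker CLI the same way the log above
// does, asking only for the container's state (e.g. "running", "exited").
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Profile name taken from the log; any container name works.
	state, err := containerState("default-k8s-diff-port-663015")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", state)
}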
	I1019 17:16:33.541492  219832 cri.go:89] found id: ""
	I1019 17:16:33.541549  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.541561  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:33.541570  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:33.541691  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:33.576629  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:33.576656  219832 cri.go:89] found id: ""
	I1019 17:16:33.576675  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:33.576732  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:33.580857  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:33.580928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:33.612243  219832 cri.go:89] found id: ""
	I1019 17:16:33.612270  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.612280  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:33.612289  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:33.612354  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:33.646219  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:33.646244  219832 cri.go:89] found id: ""
	I1019 17:16:33.646254  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:33.646315  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:33.650533  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:33.650597  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:33.678204  219832 cri.go:89] found id: ""
	I1019 17:16:33.678241  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.678253  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:33.678261  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:33.678316  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:33.707943  219832 cri.go:89] found id: ""
	I1019 17:16:33.707970  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.707979  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:33.707990  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:33.708004  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:33.818512  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:33.818552  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:33.835274  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:33.835301  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:33.906762  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:33.906788  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:33.906803  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:33.946207  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:33.946235  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:34.017859  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:34.017905  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:34.056158  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:34.056189  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:34.109592  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:34.109642  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:36.645341  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:36.645714  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:36.645761  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:36.645814  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:36.672947  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:36.672969  219832 cri.go:89] found id: ""
	I1019 17:16:36.672977  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:36.673036  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.677047  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:36.677129  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:36.705274  219832 cri.go:89] found id: ""
	I1019 17:16:36.705300  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.705311  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:36.705318  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:36.705378  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:36.733936  219832 cri.go:89] found id: ""
	I1019 17:16:36.733960  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.733967  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:36.733972  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:36.734016  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:36.760255  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:36.760276  219832 cri.go:89] found id: ""
	I1019 17:16:36.760284  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:36.760339  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.764753  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:36.764821  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:36.805419  219832 cri.go:89] found id: ""
	I1019 17:16:36.805449  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.805461  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:36.805472  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:36.805531  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:36.835323  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:36.835344  219832 cri.go:89] found id: ""
	I1019 17:16:36.835354  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:36.835415  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.839353  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:36.839424  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:36.866008  219832 cri.go:89] found id: ""
	I1019 17:16:36.866034  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.866045  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:36.866052  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:36.866130  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:36.898201  219832 cri.go:89] found id: ""
	I1019 17:16:36.898224  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.898243  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:36.898254  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:36.898267  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:36.931659  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:36.931693  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:37.032445  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:37.032476  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:37.046900  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:37.046925  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:37.106332  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:37.106353  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:37.106370  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:37.141112  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:37.141142  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:37.198221  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:37.198254  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:37.235503  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:37.235527  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
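Each round of log gathering above follows the same two-step pattern: `crictl ps -a --quiet --name=<component>` to resolve container IDs (often empty here, hence the "No container was found matching" warnings), then `crictl logs --tail 400 <id>` for any that exist. A rough Go sketch of that loop, assuming crictl and passwordless sudo on the node; the function name is made up for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// criContainerIDs mirrors the probe in the log: `crictl ps -a --quiet
// --name=<x>` prints one container ID per line, or nothing on no match.
func criContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := criContainerIDs(component)
		if err != nil {
			fmt.Printf("%s: listing failed: %v\n", component, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("%s: no containers\n", component)
			continue
		}
		// Tail each container's log the same way the gatherer above does.
		for _, id := range ids {
			out, _ := exec.Command("sudo", "crictl", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("%s [%s]:\n%s\n", component, id, out)
		}
	}
}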
	I1019 17:16:38.046425  272363 kubeadm.go:884] updating cluster {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:38.046594  272363 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.046679  272363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.082448  272363 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.082473  272363 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:38.082515  272363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.112614  272363 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.112632  272363 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:38.112639  272363 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:38.112732  272363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-848035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:16:38.112803  272363 ssh_runner.go:195] Run: crio config
	I1019 17:16:38.164220  272363 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.164247  272363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.164265  272363 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:16:38.164294  272363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-848035 NodeName:newest-cni-848035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:38.164457  272363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-848035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
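The kubeadm config above is rendered from the cluster settings logged earlier (node IP 192.168.76.2, bind port 8443, pod CIDR 10.42.0.0/16). As a sketch of the technique rather than minikube's actual template, here is a trimmed InitConfiguration stanza rendered with Go's text/template from hypothetical fields:

package main

import (
	"os"
	"text/template"
)

// A cut-down, hypothetical version of how a kubeadm InitConfiguration
// stanza like the one above can be generated from cluster settings; the
// real minikube template carries many more fields and documents.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

type cfg struct {
	NodeIP        string
	APIServerPort int
	NodeName      string
}

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, cfg{
		NodeIP:        "192.168.76.2",
		APIServerPort: 8443,
		NodeName:      "newest-cni-848035",
	})
}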
	
	I1019 17:16:38.164529  272363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:38.173498  272363 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:38.173549  272363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:38.181591  272363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:16:38.196205  272363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:38.210033  272363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1019 17:16:38.224305  272363 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:38.229006  272363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
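The one-liner above refreshes /etc/hosts in place: drop any stale line ending in a tab plus control-plane.minikube.internal, append the current mapping, and copy the result back. The same idea in Go, using a hypothetical scratch file so the sketch can run without root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the shell one-liner above: remove any existing
// line for the host, append a fresh "IP<TAB>host" mapping, and rewrite the
// file. Rewriting the real /etc/hosts would need root.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Matches the grep -v $'\thost$' filter in the log.
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.test" is a stand-in for /etc/hosts; IP and host from the log.
	if err := ensureHostsEntry("hosts.test", "192.168.76.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}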
	I1019 17:16:38.241325  272363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:38.343442  272363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:38.366970  272363 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035 for IP: 192.168.76.2
	I1019 17:16:38.367003  272363 certs.go:195] generating shared ca certs ...
	I1019 17:16:38.367025  272363 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:38.367205  272363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:16:38.367262  272363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:16:38.367275  272363 certs.go:257] generating profile certs ...
	I1019 17:16:38.367355  272363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.key
	I1019 17:16:38.367408  272363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69
	I1019 17:16:38.367448  272363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key
	I1019 17:16:38.367603  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:16:38.367649  272363 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:38.367663  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:16:38.367689  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:38.367717  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:38.367750  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:38.367794  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:38.368555  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:38.389844  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:38.416463  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:38.441481  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:16:38.468211  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:16:38.490019  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:16:38.508191  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:38.528721  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:38.548883  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:16:38.568140  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:16:38.587376  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:38.615831  272363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:38.633521  272363 ssh_runner.go:195] Run: openssl version
	I1019 17:16:38.640143  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:16:38.650006  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.654634  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.654709  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.693942  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:16:38.703190  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:16:38.712169  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.716121  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.716173  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.753613  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:38.762997  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:38.773047  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.777659  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.777718  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.819882  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:38.828357  272363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:38.832536  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:16:38.871378  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:16:38.920495  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:16:38.971530  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:16:39.034802  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:16:39.099512  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
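The series of `openssl x509 -checkend 86400` runs above asks one question per certificate: will it expire within the next 24 hours? The equivalent check in Go with crypto/x509; the path is taken from the log and the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the Go analogue of `openssl x509 -checkend <seconds>`:
// it reports whether the PEM certificate at path expires inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Run on the node itself; needs read access to the cert.
	soon, err := expiresWithin(
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}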
	I1019 17:16:39.169716  272363 kubeadm.go:401] StartCluster: {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:39.169962  272363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:39.170050  272363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:39.217835  272363 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:39.217862  272363 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:39.217868  272363 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:39.217872  272363 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:39.217876  272363 cri.go:89] found id: ""
	I1019 17:16:39.217923  272363 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:16:39.233895  272363 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:39Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:39.233963  272363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:39.245180  272363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:16:39.245204  272363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:16:39.245254  272363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:16:39.255914  272363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:16:39.257057  272363 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-848035" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:39.257876  272363 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-848035" cluster setting kubeconfig missing "newest-cni-848035" context setting]
	I1019 17:16:39.258975  272363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.261010  272363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:16:39.271996  272363 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:16:39.272139  272363 kubeadm.go:602] duration metric: took 26.818659ms to restartPrimaryControlPlane
	I1019 17:16:39.272162  272363 kubeadm.go:403] duration metric: took 102.450444ms to StartCluster
	I1019 17:16:39.272182  272363 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.272253  272363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:39.274855  272363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.275442  272363 config.go:182] Loaded profile config "newest-cni-848035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:39.275528  272363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:39.275568  272363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:39.275712  272363 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-848035"
	I1019 17:16:39.275727  272363 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-848035"
	W1019 17:16:39.275735  272363 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:16:39.275736  272363 addons.go:70] Setting dashboard=true in profile "newest-cni-848035"
	I1019 17:16:39.275751  272363 addons.go:239] Setting addon dashboard=true in "newest-cni-848035"
	W1019 17:16:39.275758  272363 addons.go:248] addon dashboard should already be in state true
	I1019 17:16:39.275776  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.275780  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.276271  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.276271  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.276357  272363 addons.go:70] Setting default-storageclass=true in profile "newest-cni-848035"
	I1019 17:16:39.276384  272363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-848035"
	I1019 17:16:39.276705  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.280987  272363 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:39.283264  272363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:39.305744  272363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:16:39.305768  272363 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:16:39.308123  272363 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:16:34.550249  268862 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 17:16:34.555727  268862 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:34.555751  268862 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500 (response body identical to the one above)
	I1019 17:16:35.049375  268862 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 17:16:35.053837  268862 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1019 17:16:35.054853  268862 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:35.054877  268862 api_server.go:131] duration metric: took 1.00555724s to wait for apiserver health ...
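The 500 responses above come from the apiserver's aggregated /healthz endpoint: each [+]/[-] line is one poststarthook check, and the endpoint flips to 200 only once every check passes. A minimal sketch of the same poll from a shell, assuming the apiserver address shown in this log and skipping TLS verification for brevity:

```bash
# Repeat until /healthz returns 2xx; curl -f exits non-zero on HTTP errors such as 500.
until curl -ksf https://192.168.103.2:8443/healthz >/dev/null; do
  sleep 0.5
done
# The per-check breakdown seen in the log is available via the verbose flag:
curl -ks "https://192.168.103.2:8443/healthz?verbose"
```

On clusters that do not allow anonymous health checks, the same request needs a bearer token or a client certificate.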
	I1019 17:16:35.054886  268862 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:35.058519  268862 system_pods.go:59] 8 kube-system pods found
	I1019 17:16:35.058562  268862 system_pods.go:61] "coredns-66bc5c9577-zw7d8" [e1cb390d-b0bd-4da0-9e8a-92250e2485cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:35.058573  268862 system_pods.go:61] "etcd-embed-certs-090139" [4082e3bc-d44c-4d23-83ab-6640758b2707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:35.058584  268862 system_pods.go:61] "kindnet-dwsh7" [e081eba9-4c2c-401b-84d2-1bfdd53460e9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:35.058590  268862 system_pods.go:61] "kube-apiserver-embed-certs-090139" [12c08735-a8cb-48a8-98ff-f464a4a93d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:35.058599  268862 system_pods.go:61] "kube-controller-manager-embed-certs-090139" [63f19dee-5d68-40ef-b15a-830203608d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:35.058605  268862 system_pods.go:61] "kube-proxy-8f4lh" [5baffb03-44e9-4304-a146-40598b517031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:35.058612  268862 system_pods.go:61] "kube-scheduler-embed-certs-090139" [38e53961-5825-4991-8ee7-21f75edb86ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:35.058618  268862 system_pods.go:61] "storage-provisioner" [761c74ff-17e1-44c3-b64d-dd9c9f9863d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:35.058624  268862 system_pods.go:74] duration metric: took 3.732419ms to wait for pod list to return data ...
	I1019 17:16:35.058634  268862 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:35.060857  268862 default_sa.go:45] found service account: "default"
	I1019 17:16:35.060874  268862 default_sa.go:55] duration metric: took 2.235746ms for default service account to be created ...
	I1019 17:16:35.060881  268862 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:16:35.063423  268862 system_pods.go:86] 8 kube-system pods found
	I1019 17:16:35.063448  268862 system_pods.go:89] "coredns-66bc5c9577-zw7d8" [e1cb390d-b0bd-4da0-9e8a-92250e2485cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:35.063455  268862 system_pods.go:89] "etcd-embed-certs-090139" [4082e3bc-d44c-4d23-83ab-6640758b2707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:35.063462  268862 system_pods.go:89] "kindnet-dwsh7" [e081eba9-4c2c-401b-84d2-1bfdd53460e9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:35.063471  268862 system_pods.go:89] "kube-apiserver-embed-certs-090139" [12c08735-a8cb-48a8-98ff-f464a4a93d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:35.063478  268862 system_pods.go:89] "kube-controller-manager-embed-certs-090139" [63f19dee-5d68-40ef-b15a-830203608d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:35.063488  268862 system_pods.go:89] "kube-proxy-8f4lh" [5baffb03-44e9-4304-a146-40598b517031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:35.063497  268862 system_pods.go:89] "kube-scheduler-embed-certs-090139" [38e53961-5825-4991-8ee7-21f75edb86ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:35.063505  268862 system_pods.go:89] "storage-provisioner" [761c74ff-17e1-44c3-b64d-dd9c9f9863d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:35.063516  268862 system_pods.go:126] duration metric: took 2.629242ms to wait for k8s-apps to be running ...
	I1019 17:16:35.063524  268862 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:16:35.063563  268862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:35.076395  268862 system_svc.go:56] duration metric: took 12.863749ms WaitForService to wait for kubelet
	I1019 17:16:35.076420  268862 kubeadm.go:587] duration metric: took 3.228442809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:35.076437  268862 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:35.079642  268862 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:35.079669  268862 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:35.079684  268862 node_conditions.go:105] duration metric: took 3.235964ms to run NodePressure ...
	I1019 17:16:35.079698  268862 start.go:242] waiting for startup goroutines ...
	I1019 17:16:35.079712  268862 start.go:247] waiting for cluster config update ...
	I1019 17:16:35.079724  268862 start.go:256] writing updated cluster config ...
	I1019 17:16:35.080036  268862 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:35.083786  268862 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:35.087316  268862 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zw7d8" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:16:37.094187  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	W1019 17:16:39.105580  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
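The extra wait above polls each control-plane pod until its Ready condition is true. The equivalent check with kubectl, using the same label and the 4-minute budget the log mentions:

```bash
# Block until the coredns pod reports Ready, or fail once the timeout elapses.
kubectl -n kube-system wait pod -l k8s-app=kube-dns \
  --for=condition=Ready --timeout=4m
```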
	I1019 17:16:39.308726  272363 addons.go:239] Setting addon default-storageclass=true in "newest-cni-848035"
	W1019 17:16:39.308753  272363 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:16:39.308782  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.309191  272363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:39.309210  272363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:16:39.309351  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.309359  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:16:39.309373  272363 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:16:39.309432  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.309712  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.343585  272363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:39.343613  272363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:16:39.343675  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.344043  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.345230  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.372756  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.471918  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:39.480326  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:16:39.480419  272363 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:16:39.480453  272363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:39.506213  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:39.513765  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:16:39.513796  272363 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:16:39.537709  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:16:39.537736  272363 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:16:39.556979  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:16:39.557004  272363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:16:39.578503  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:16:39.578529  272363 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:16:39.599401  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:16:39.599437  272363 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:16:39.616578  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:16:39.616607  272363 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:16:39.630908  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:16:39.630950  272363 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:16:39.644651  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:16:39.644678  272363 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:16:39.660847  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:16:41.672703  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20068394s)
	I1019 17:16:41.672780  272363 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.192300061s)
	I1019 17:16:41.672821  272363 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:41.672878  272363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:41.673187  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.16688659s)
	I1019 17:16:41.673668  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.012779902s)
	I1019 17:16:41.676143  272363 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-848035 addons enable metrics-server
	
	I1019 17:16:41.689805  272363 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 17:16:41.691507  272363 addons.go:515] duration metric: took 2.415945834s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
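All three addons were applied by scp-ing manifests into /etc/kubernetes/addons on the node and running kubectl against them there. A quick way to confirm the result from the host, assuming the kubeconfig already points at this profile:

```bash
# Addon status as minikube sees it:
minikube -p newest-cni-848035 addons list
# Objects created by the dashboard manifests (namespace from dashboard-ns.yaml):
kubectl -n kubernetes-dashboard get deployments,services,serviceaccounts
```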
	I1019 17:16:41.692321  272363 api_server.go:72] duration metric: took 2.416731792s to wait for apiserver process to appear ...
	I1019 17:16:41.692341  272363 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:41.692359  272363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:41.697526  272363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:41.697550  272363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500 (response body identical to the one above)
	I1019 17:16:42.193291  272363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.198684  272363 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:16:42.199865  272363 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:42.199892  272363 api_server.go:131] duration metric: took 507.542957ms to wait for apiserver health ...
	I1019 17:16:42.199903  272363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:42.203562  272363 system_pods.go:59] 8 kube-system pods found
	I1019 17:16:42.203609  272363 system_pods.go:61] "coredns-66bc5c9577-4r958" [c909784b-62ef-4de8-8c71-0fdb70321fab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:16:42.203630  272363 system_pods.go:61] "etcd-newest-cni-848035" [d4f66958-0c51-495b-9982-fbc5fa2eaf5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:42.203643  272363 system_pods.go:61] "kindnet-cldtb" [3371a4c2-e7be-4f7c-9e77-69cce40a6458] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:42.203654  272363 system_pods.go:61] "kube-apiserver-newest-cni-848035" [0381a31e-6a3a-48c8-acb0-905da2c9e2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:42.203668  272363 system_pods.go:61] "kube-controller-manager-newest-cni-848035" [26b369eb-d2b2-488b-afd2-8958a5a8f955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:42.203681  272363 system_pods.go:61] "kube-proxy-4xgrb" [f332f5bc-a940-414b-816d-fe262c303b5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:42.203690  272363 system_pods.go:61] "kube-scheduler-newest-cni-848035" [22b426e0-aafb-4a62-9535-894b75da5f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:42.203697  272363 system_pods.go:61] "storage-provisioner" [b4254e2b-7d6f-4957-a6c4-81ea56715968] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:16:42.203704  272363 system_pods.go:74] duration metric: took 3.794877ms to wait for pod list to return data ...
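The two Pending pods above are unschedulable because the node still carries the node.kubernetes.io/not-ready taint; the scheduler retries once kubelet and the CNI mark the node Ready. The taints can be inspected directly:

```bash
# Print each node together with its current taints.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```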
	I1019 17:16:42.203714  272363 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:42.206443  272363 default_sa.go:45] found service account: "default"
	I1019 17:16:42.206466  272363 default_sa.go:55] duration metric: took 2.745259ms for default service account to be created ...
	I1019 17:16:42.206479  272363 kubeadm.go:587] duration metric: took 2.930891741s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:16:42.206501  272363 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:42.209663  272363 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:42.209692  272363 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:42.209707  272363 node_conditions.go:105] duration metric: took 3.199964ms to run NodePressure ...
	I1019 17:16:42.209721  272363 start.go:242] waiting for startup goroutines ...
	I1019 17:16:42.209745  272363 start.go:247] waiting for cluster config update ...
	I1019 17:16:42.209759  272363 start.go:256] writing updated cluster config ...
	I1019 17:16:42.210186  272363 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:42.281420  272363 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:16:42.284529  272363 out.go:179] * Done! kubectl is now configured to use "newest-cni-848035" cluster and "default" namespace by default
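Once the run prints Done!, the host kubeconfig holds a context for the profile. A two-line sanity check:

```bash
kubectl config current-context   # expected: newest-cni-848035
kubectl get nodes -o wide        # confirms the apiserver answers on that context
```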
	I1019 17:16:38.497872  274481 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-663015" ...
	I1019 17:16:38.497943  274481 cli_runner.go:164] Run: docker start default-k8s-diff-port-663015
	I1019 17:16:38.754432  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:38.775319  274481 kic.go:430] container "default-k8s-diff-port-663015" state is running.
	I1019 17:16:38.775801  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:38.796607  274481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:16:38.796868  274481 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:38.796955  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:38.817427  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:38.817764  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:38.817786  274481 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:38.818415  274481 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56424->127.0.0.1:33099: read: connection reset by peer
	I1019 17:16:41.968235  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:16:41.968269  274481 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-663015"
	I1019 17:16:41.968336  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:41.995630  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:41.995938  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:41.995968  274481 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-663015 && echo "default-k8s-diff-port-663015" | sudo tee /etc/hostname
	I1019 17:16:42.165798  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:16:42.165916  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.192141  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:42.192575  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:42.192617  274481 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-663015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-663015/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-663015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:42.347247  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:16:42.347335  274481 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:16:42.347378  274481 ubuntu.go:190] setting up certificates
	I1019 17:16:42.347392  274481 provision.go:84] configureAuth start
	I1019 17:16:42.347449  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:42.370961  274481 provision.go:143] copyHostCerts
	I1019 17:16:42.371034  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:16:42.371052  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:16:42.371141  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:16:42.371290  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:16:42.371306  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:16:42.371351  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:16:42.371437  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:16:42.371450  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:16:42.371487  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:16:42.371561  274481 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-663015 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-663015 localhost minikube]
	I1019 17:16:42.431792  274481 provision.go:177] copyRemoteCerts
	I1019 17:16:42.431880  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:42.431924  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.453388  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:42.559643  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:42.580943  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:16:42.607817  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 17:16:42.630023  274481 provision.go:87] duration metric: took 282.6201ms to configureAuth
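configureAuth generated a server certificate with the SAN list printed at provision.go:117 above. The SANs actually baked into it can be read back with openssl, using the path from this log:

```bash
# List the Subject Alternative Names in the generated server certificate.
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'
```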
	I1019 17:16:42.630047  274481 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:42.630243  274481 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:42.630362  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.650443  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:42.650784  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:42.650815  274481 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:39.790368  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:39.790807  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:39.790874  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:39.790928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:39.829893  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:39.829917  219832 cri.go:89] found id: ""
	I1019 17:16:39.829927  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:39.829980  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:39.834841  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:39.834909  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:39.875492  219832 cri.go:89] found id: ""
	I1019 17:16:39.875518  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.875528  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:39.875535  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:39.875589  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:39.911131  219832 cri.go:89] found id: ""
	I1019 17:16:39.911158  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.911169  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:39.911181  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:39.911241  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:39.950100  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:39.950126  219832 cri.go:89] found id: ""
	I1019 17:16:39.950136  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:39.950192  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:39.955475  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:39.955543  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:39.990249  219832 cri.go:89] found id: ""
	I1019 17:16:39.990281  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.990291  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:39.990299  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:39.990352  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:40.023216  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:40.023240  219832 cri.go:89] found id: ""
	I1019 17:16:40.023250  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:40.023308  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:40.027512  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:40.027576  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:40.061930  219832 cri.go:89] found id: ""
	I1019 17:16:40.061965  219832 logs.go:282] 0 containers: []
	W1019 17:16:40.061973  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:40.061978  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:40.062024  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:40.096738  219832 cri.go:89] found id: ""
	I1019 17:16:40.096772  219832 logs.go:282] 0 containers: []
	W1019 17:16:40.096787  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:40.096798  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:40.097021  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:40.250474  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:40.250525  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:40.272407  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:40.272446  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:40.363698  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:40.363724  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:40.363742  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:40.412778  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:40.412825  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:40.504759  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:40.505124  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:40.556041  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:40.556086  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:40.631963  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:40.632012  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
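Because the apiserver is refusing connections, the log-gathering pass above pulls evidence straight from the runtime instead. The same queries can be reproduced on the node; the container ID is a placeholder to fill in from the first command:

```bash
sudo crictl ps -a --name kube-apiserver --quiet   # IDs of matching containers, running or exited
sudo crictl logs --tail 400 <container-id>        # last 400 log lines for one container
sudo journalctl -u kubelet -n 400                 # kubelet unit logs, as gathered above
```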
	I1019 17:16:43.169211  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:43.169777  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:43.169946  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:43.170005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:43.212032  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:43.212057  219832 cri.go:89] found id: ""
	I1019 17:16:43.212091  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:43.212147  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.217841  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:43.217913  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:43.252451  219832 cri.go:89] found id: ""
	I1019 17:16:43.252479  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.252491  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:43.252499  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:43.252576  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:43.293365  219832 cri.go:89] found id: ""
	I1019 17:16:43.293404  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.293416  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:43.293423  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:43.293481  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:43.325984  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:43.326016  219832 cri.go:89] found id: ""
	I1019 17:16:43.326026  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:43.326110  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.329938  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:43.330005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:43.370926  219832 cri.go:89] found id: ""
	I1019 17:16:43.370955  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.370965  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:43.370974  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:43.371057  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:43.411453  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:43.411478  219832 cri.go:89] found id: ""
	I1019 17:16:43.411489  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:43.411541  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.415693  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:43.415748  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:43.449970  219832 cri.go:89] found id: ""
	I1019 17:16:43.449998  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.450007  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:43.450014  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:43.450093  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:43.490308  219832 cri.go:89] found id: ""
	I1019 17:16:43.490334  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.490343  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:43.490353  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:43.490368  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	W1019 17:16:41.594037  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	W1019 17:16:44.095881  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	I1019 17:16:43.758624  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:43.758653  274481 machine.go:97] duration metric: took 4.961766704s to provisionDockerMachine
	I1019 17:16:43.758666  274481 start.go:293] postStartSetup for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:16:43.758681  274481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:43.758749  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:43.758801  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:43.782192  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:43.891871  274481 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:43.896719  274481 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:43.896841  274481 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:43.896857  274481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:16:43.896924  274481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:16:43.897077  274481 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:16:43.897199  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:43.908894  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:43.947756  274481 start.go:296] duration metric: took 189.073166ms for postStartSetup
	I1019 17:16:43.947876  274481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:43.947926  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:43.975271  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.078918  274481 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:44.085274  274481 fix.go:56] duration metric: took 5.60935599s for fixHost
	I1019 17:16:44.085304  274481 start.go:83] releasing machines lock for "default-k8s-diff-port-663015", held for 5.609428848s
	I1019 17:16:44.085379  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:44.110831  274481 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:44.110908  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:44.111015  274481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:44.111092  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:44.135271  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.135750  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.238339  274481 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:44.320379  274481 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:44.367371  274481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:44.373006  274481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:44.373087  274481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:44.383516  274481 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:16:44.383542  274481 start.go:496] detecting cgroup driver to use...
	I1019 17:16:44.383576  274481 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:16:44.383629  274481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:44.405372  274481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:44.422022  274481 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:44.422114  274481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:44.442186  274481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:44.459354  274481 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:44.584124  274481 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:44.698334  274481 docker.go:234] disabling docker service ...
	I1019 17:16:44.698410  274481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:44.717935  274481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:44.735606  274481 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:44.856239  274481 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:44.960294  274481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
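Stopping the services is not enough on a socket-activated system, so the units are disabled and masked as well. Verifying that both Docker entry points stay down:

```bash
# Prints the enablement state of each unit; expect "masked" for all four.
systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
```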
	I1019 17:16:44.974250  274481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:44.989696  274481 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:44.989782  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:44.999654  274481 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:16:44.999734  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.014466  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.026758  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.039239  274481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:45.048888  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.059352  274481 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.068539  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.078630  274481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:45.087523  274481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:45.095914  274481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:45.182148  274481 ssh_runner.go:195] Run: sudo systemctl restart crio
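Each sed above edits the CRI-O drop-in rather than the main config file. After the restart, the effective values can be read back from the same drop-in:

```bash
# Confirm the pause image, cgroup manager, and sysctl edits landed.
grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
sudo systemctl is-active crio   # should print "active" after the restart
```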
	I1019 17:16:45.673736  274481 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:45.673813  274481 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:45.678492  274481 start.go:564] Will wait 60s for crictl version
	I1019 17:16:45.678558  274481 ssh_runner.go:195] Run: which crictl
	I1019 17:16:45.682858  274481 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:45.709000  274481 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:45.709135  274481 ssh_runner.go:195] Run: crio --version
	I1019 17:16:45.737974  274481 ssh_runner.go:195] Run: crio --version
	I1019 17:16:45.767420  274481 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.762215848Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4xgrb/POD" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.762303511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.764933253Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.766040596Z" level=info msg="Ran pod sandbox f039a0d374df465cc551d5721b0186c9c29149745f538e5494a2736bd51878f4 with infra container: kube-system/kindnet-cldtb/POD" id=4b23d5eb-a75f-424a-8cfc-f4d3c587b26a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.766569447Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.767516768Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=258298a4-82ef-442c-b34b-63abd40ced06 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.76828476Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.769175959Z" level=info msg="Ran pod sandbox be267c736ec688aa18a3940a8f0b0f4ba175330ed538b5e2133a47987dae89dc with infra container: kube-system/kube-proxy-4xgrb/POD" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.769209204Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1837fe2f-b9c5-486e-927c-9489a5462d03 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770180274Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2c9f3de9-2bd1-44a1-ada8-63a932a16728 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770473119Z" level=info msg="Creating container: kube-system/kindnet-cldtb/kindnet-cni" id=de18db60-e0be-4722-b305-efec2f851a42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770777859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.771291016Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8a8ef59a-9cb8-45a2-870b-cc12fbc12650 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.772348293Z" level=info msg="Creating container: kube-system/kube-proxy-4xgrb/kube-proxy" id=9522e815-2412-4a25-983a-c96d76cb4f6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.773504104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.774916892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.775517207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.780349346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.780934094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.807268339Z" level=info msg="Created container 64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24: kube-system/kindnet-cldtb/kindnet-cni" id=de18db60-e0be-4722-b305-efec2f851a42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.809325212Z" level=info msg="Starting container: 64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24" id=85779174-a876-4bf6-900a-c889173c06cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.812158678Z" level=info msg="Created container bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f: kube-system/kube-proxy-4xgrb/kube-proxy" id=9522e815-2412-4a25-983a-c96d76cb4f6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.812862589Z" level=info msg="Started container" PID=1031 containerID=64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24 description=kube-system/kindnet-cldtb/kindnet-cni id=85779174-a876-4bf6-900a-c889173c06cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=f039a0d374df465cc551d5721b0186c9c29149745f538e5494a2736bd51878f4
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.813981896Z" level=info msg="Starting container: bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f" id=ad23ff0d-0904-4b25-ba89-2bbeba160fa5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.817279927Z" level=info msg="Started container" PID=1032 containerID=bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f description=kube-system/kube-proxy-4xgrb/kube-proxy id=ad23ff0d-0904-4b25-ba89-2bbeba160fa5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be267c736ec688aa18a3940a8f0b0f4ba175330ed538b5e2133a47987dae89dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bbcdda98e3574       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   be267c736ec68       kube-proxy-4xgrb                            kube-system
	64839406f6a58       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   f039a0d374df4       kindnet-cldtb                               kube-system
	7161f4ea7f312       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   f5dd9001df0e3       etcd-newest-cni-848035                      kube-system
	82ba1cda1f516       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   7dea4bff6c66b       kube-controller-manager-newest-cni-848035   kube-system
	1c8d739ff68a7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   eabd9ae8ab8e4       kube-scheduler-newest-cni-848035            kube-system
	82a653d8de936       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   2c7a515bb7a19       kube-apiserver-newest-cni-848035            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-848035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-848035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-848035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_16_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:16:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-848035
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-848035
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                646096bf-34be-4122-8013-0d1b140e3606
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-848035                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-cldtb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-848035             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-848035    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-4xgrb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-848035             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 26s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s   kubelet          Node newest-cni-848035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s   kubelet          Node newest-cni-848035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s   kubelet          Node newest-cni-848035 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s   node-controller  Node newest-cni-848035 event: Registered Node newest-cni-848035 in Controller
	  Normal  RegisteredNode           2s    node-controller  Node newest-cni-848035 event: Registered Node newest-cni-848035 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f] <==
	{"level":"warn","ts":"2025-10-19T17:16:40.073092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.084321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.096906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.105449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.114243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.126635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.136607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.144657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.155331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.162888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.181049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.188719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.198954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.207276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.218239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.227582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.237729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.252300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.263796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.276856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.286025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.303359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.312219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.323215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.391536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:16:46 up 59 min,  0 user,  load average: 4.27, 3.17, 1.92
	Linux newest-cni-848035 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24] <==
	I1019 17:16:42.012405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:42.012787       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:16:42.012891       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:42.012905       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:42.012917       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:42.389208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:42.389243       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:42.389255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:42.390182       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:42.789360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:42.789389       1 metrics.go:72] Registering metrics
	I1019 17:16:42.789470       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727] <==
	I1019 17:16:41.039910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:16:41.040889       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:41.039972       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:16:41.039951       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:16:41.039798       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:41.042019       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:41.042110       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:41.042134       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:41.042142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:41.042151       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:41.047663       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:41.069447       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:41.073316       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:41.384267       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:41.420208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:41.443901       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:41.454049       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:41.462982       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:41.541049       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.156.229"}
	I1019 17:16:41.556099       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.245.90"}
	I1019 17:16:41.943331       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:44.808524       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:44.856586       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:44.909736       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136] <==
	I1019 17:16:44.403877       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:16:44.403896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:16:44.403856       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:16:44.403968       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:44.403996       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:16:44.404002       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:16:44.405741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:16:44.405741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:16:44.405815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:44.406128       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:16:44.406151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:16:44.407009       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:16:44.407117       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:44.408238       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:44.410383       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:44.420692       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:16:44.420831       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:16:44.420895       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:16:44.420906       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:44.420914       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:44.422949       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:16:44.423111       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:16:44.423247       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-848035"
	I1019 17:16:44.423315       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:16:44.430943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f] <==
	I1019 17:16:41.854052       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:41.911125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:42.011903       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:42.011940       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:16:42.012052       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:42.036823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:42.036893       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:42.044114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:42.044804       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:42.044843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:42.046892       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:42.046908       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:42.048159       1 config.go:200] "Starting service config controller"
	I1019 17:16:42.048174       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:42.048537       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:42.049411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:42.048943       1 config.go:309] "Starting node config controller"
	I1019 17:16:42.049439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:42.049447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:42.147129       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:42.149060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:42.149631       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e] <==
	I1019 17:16:40.009924       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:16:40.949473       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:16:40.949506       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:16:40.949518       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:16:40.949527       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:16:40.989677       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:40.989704       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:40.997411       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:40.997543       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:41.000623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:41.000799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 17:16:41.014758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:16:41.015176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:16:41.015030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:16:41.017585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:16:41.017695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1019 17:16:41.098943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:40 newest-cni-848035 kubelet[662]: E1019 17:16:40.505129     662 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-848035\" not found" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.053999     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065208     662 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065322     662 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065362     662 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.066270     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.066813     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-848035\" already exists" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.066846     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.074667     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-848035\" already exists" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.074704     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.081994     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-848035\" already exists" pod="kube-system/kube-controller-manager-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.082043     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.093535     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-848035\" already exists" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.449546     662 apiserver.go:52] "Watching apiserver"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.453256     662 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464528     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-xtables-lock\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464578     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-cni-cfg\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464613     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-lib-modules\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464633     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-xtables-lock\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464707     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-lib-modules\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.504781     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.511668     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-848035\" already exists" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
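The post-mortem sequence that follows can also be replayed by hand against a live profile; below is a minimal sketch (assuming the newest-cni-848035 profile still exists on the host and out/minikube-linux-amd64 is present in the build tree) of the same commands the harness runs:

	# Manual replay of the post-mortem collection steps (sketch, not harness code).
	PROFILE=newest-cni-848035
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"
	kubectl --context "$PROFILE" get po -A -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running
	docker inspect "$PROFILE"
	out/minikube-linux-amd64 -p "$PROFILE" logs -n 25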
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-848035 -n newest-cni-848035: exit status 2 (422.469648ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-848035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr: exit status 1 (84.669118ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4r958" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zr7l6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4ttrr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr: exit status 1
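The NotFound errors above most likely mean the four pods listed in the earlier step were deleted and recreated (with new hash suffixes) between the listing and the describe. A race-tolerant variant, shown only as an illustrative sketch, re-queries and describes in one pass so the captured names cannot go stale:

	# Hypothetical variant: describe whatever is non-Running right now,
	# tolerating pods that vanish mid-loop.
	kubectl --context newest-cni-848035 get po -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers \
	  | while read -r ns name; do
	      kubectl --context newest-cni-848035 -n "$ns" describe pod "$name" || true
	    done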
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-848035
helpers_test.go:243: (dbg) docker inspect newest-cni-848035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	        "Created": "2025-10-19T17:16:04.614467412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272742,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:32.061755844Z",
	            "FinishedAt": "2025-10-19T17:16:31.105026889Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/hosts",
	        "LogPath": "/var/lib/docker/containers/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077/d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077-json.log",
	        "Name": "/newest-cni-848035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-848035:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-848035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4878977e53a743ea778cb864c7df411e59f7f981a676f0796cbd728924f7077",
	                "LowerDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f8e59503f105b74cc854aa5854cdd73481fc5e056e7a00d02958c6db21e7382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-848035",
	                "Source": "/var/lib/docker/volumes/newest-cni-848035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-848035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-848035",
	                "name.minikube.sigs.k8s.io": "newest-cni-848035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f60258a544ed3a3a5df9cea272b23320c3a4cf51c4f4efd7453932ba92f6a4a",
	            "SandboxKey": "/var/run/docker/netns/1f60258a544e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-848035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:cf:8e:6c:be:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd85ff71a8849a849b2f5448c4af9d5d2d209e0b42263ef0a6ae677b20846d2a",
	                    "EndpointID": "302dbc35bc854d4a8951a7868a9cfaf8fbf037e00c1f20a37e833b513d17e6d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-848035",
	                        "d4878977e53a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035: exit status 2 (374.491226ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
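The --format flag takes a Go template evaluated against minikube's status struct; a short sketch (the Host, Kubelet and APIServer field names are taken from the commands above and from minikube's default status output, so Kubelet is an assumption here) querying several fields in one call:

	# Query several status fields at once; the exit status stays non-zero
	# (here 2) while any component is paused or stopped, as seen above.
	out/minikube-linux-amd64 status -p newest-cni-848035 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
	echo "exit=$?"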
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-848035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-848035 logs -n 25: (1.206423403s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-904967 image list --format=json                                                                                                                                                                                               │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p old-k8s-version-904967 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p old-k8s-version-904967                                                                                                                                                                                                                     │ old-k8s-version-904967       │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p disable-driver-mounts-858297                                                                                                                                                                                                               │ disable-driver-mounts-858297 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ image   │ no-preload-806996 image list --format=json                                                                                                                                                                                                    │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ pause   │ -p no-preload-806996 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-663015 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
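
The table above is the audit (command-history) section of the minikube log bundle; the `==> Last Start <==` header below marks where the most recent start log begins. To pull a single profile's commands out of a saved copy of this report, splitting rows on the `│` column separator is enough; a minimal sketch, assuming the report was saved as report.txt (a placeholder name):

	# field 4 is the PROFILE column once a row is split on "│"
	awk -F'│' '$4 ~ /newest-cni-848035/' report.txt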
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:16:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:16:38.258301  274481 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:38.258615  274481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:38.258628  274481 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:38.258635  274481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:38.258873  274481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:16:38.259386  274481 out.go:368] Setting JSON to false
	I1019 17:16:38.260604  274481 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3544,"bootTime":1760890654,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:16:38.260696  274481 start.go:143] virtualization: kvm guest
	I1019 17:16:38.262889  274481 out.go:179] * [default-k8s-diff-port-663015] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:16:38.264756  274481 notify.go:221] Checking for updates...
	I1019 17:16:38.264784  274481 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:38.266508  274481 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:38.267722  274481 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:38.269377  274481 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:16:38.271505  274481 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:16:38.273134  274481 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:16:38.274964  274481 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:38.275647  274481 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:38.303784  274481 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:16:38.303857  274481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:38.368133  274481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:16:38.357874865 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:38.368275  274481 docker.go:319] overlay module found
	I1019 17:16:38.371129  274481 out.go:179] * Using the docker driver based on existing profile
	I1019 17:16:38.372655  274481 start.go:309] selected driver: docker
	I1019 17:16:38.372674  274481 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:38.372778  274481 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:38.373440  274481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:38.440582  274481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:16:38.429155941 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:16:38.440968  274481 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:38.441003  274481 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.441063  274481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.441127  274481 start.go:353] cluster config:
	{Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:38.443264  274481 out.go:179] * Starting "default-k8s-diff-port-663015" primary control-plane node in "default-k8s-diff-port-663015" cluster
	I1019 17:16:38.444561  274481 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:16:38.445931  274481 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:16:38.447168  274481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.447219  274481 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:16:38.447232  274481 cache.go:59] Caching tarball of preloaded images
	I1019 17:16:38.447331  274481 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:16:38.447342  274481 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:16:38.447483  274481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:16:38.447738  274481 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:16:38.475723  274481 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:16:38.475744  274481 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:16:38.475767  274481 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:16:38.475796  274481 start.go:360] acquireMachinesLock for default-k8s-diff-port-663015: {Name:mkc3b977c4f353256fa3816417a52809b235a030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:16:38.475861  274481 start.go:364] duration metric: took 43.597µs to acquireMachinesLock for "default-k8s-diff-port-663015"
	I1019 17:16:38.475884  274481 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:16:38.475912  274481 fix.go:54] fixHost starting: 
	I1019 17:16:38.476205  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:38.495784  274481 fix.go:112] recreateIfNeeded on default-k8s-diff-port-663015: state=Stopped err=<nil>
	W1019 17:16:38.495815  274481 fix.go:138] unexpected machine state, will restart: <nil>
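
The two lines above are minikube's fixHost path: it inspects the kic container's state, finds it Stopped, and decides to restart rather than recreate the machine. A rough shell equivalent of just that check, assuming a plain docker start is an acceptable stand-in for the fuller re-provisioning minikube performs:

	# same state probe the log shows via cli_runner
	state=$(docker container inspect default-k8s-diff-port-663015 --format '{{.State.Status}}')
	if [ "$state" != "running" ]; then
	  docker start default-k8s-diff-port-663015   # minikube additionally re-runs provisioning
	fi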
	I1019 17:16:33.541492  219832 cri.go:89] found id: ""
	I1019 17:16:33.541549  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.541561  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:33.541570  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:33.541691  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:33.576629  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:33.576656  219832 cri.go:89] found id: ""
	I1019 17:16:33.576675  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:33.576732  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:33.580857  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:33.580928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:33.612243  219832 cri.go:89] found id: ""
	I1019 17:16:33.612270  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.612280  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:33.612289  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:33.612354  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:33.646219  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:33.646244  219832 cri.go:89] found id: ""
	I1019 17:16:33.646254  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:33.646315  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:33.650533  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:33.650597  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:33.678204  219832 cri.go:89] found id: ""
	I1019 17:16:33.678241  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.678253  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:33.678261  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:33.678316  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:33.707943  219832 cri.go:89] found id: ""
	I1019 17:16:33.707970  219832 logs.go:282] 0 containers: []
	W1019 17:16:33.707979  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:33.707990  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:33.708004  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:33.818512  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:33.818552  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:33.835274  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:33.835301  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:33.906762  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
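
The `connection refused` above only means the apiserver was not serving on port 8443 at the moment log gathering ran, so kubectl's describe call fails before producing any output. The same reachability question can be asked directly, mirroring the healthz probe minikube itself issues a few lines below (-k because the serving cert is signed by minikubeCA, not a system CA):

	curl -k https://192.168.94.2:8443/healthz || echo "apiserver not reachable yet"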
	I1019 17:16:33.906788  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:33.906803  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:33.946207  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:33.946235  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:34.017859  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:34.017905  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:34.056158  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:34.056189  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:34.109592  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:34.109642  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
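
The container-status command above packs a two-level fallback into one line: resolve crictl from PATH (keeping the bare name if the lookup finds nothing), and if the crictl listing fails entirely, fall back to the docker CLI. Unrolled into a standalone sketch:

	# prefer crictl (CRI runtimes such as CRI-O); fall back to the docker CLI
	CRICTL="$(command -v crictl || echo crictl)"
	sudo "$CRICTL" ps -a || sudo docker ps -a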
	I1019 17:16:36.645341  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:36.645714  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:36.645761  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:36.645814  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:36.672947  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:36.672969  219832 cri.go:89] found id: ""
	I1019 17:16:36.672977  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:36.673036  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.677047  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:36.677129  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:36.705274  219832 cri.go:89] found id: ""
	I1019 17:16:36.705300  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.705311  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:36.705318  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:36.705378  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:36.733936  219832 cri.go:89] found id: ""
	I1019 17:16:36.733960  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.733967  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:36.733972  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:36.734016  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:36.760255  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:36.760276  219832 cri.go:89] found id: ""
	I1019 17:16:36.760284  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:36.760339  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.764753  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:36.764821  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:36.805419  219832 cri.go:89] found id: ""
	I1019 17:16:36.805449  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.805461  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:36.805472  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:36.805531  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:36.835323  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:36.835344  219832 cri.go:89] found id: ""
	I1019 17:16:36.835354  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:36.835415  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:36.839353  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:36.839424  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:36.866008  219832 cri.go:89] found id: ""
	I1019 17:16:36.866034  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.866045  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:36.866052  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:36.866130  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:36.898201  219832 cri.go:89] found id: ""
	I1019 17:16:36.898224  219832 logs.go:282] 0 containers: []
	W1019 17:16:36.898243  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:36.898254  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:36.898267  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:36.931659  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:36.931693  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:37.032445  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:37.032476  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:37.046900  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:37.046925  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:37.106332  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:37.106353  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:37.106370  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:37.141112  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:37.141142  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:37.198221  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:37.198254  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:37.235503  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:37.235527  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:38.046425  272363 kubeadm.go:884] updating cluster {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:38.046594  272363 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.046679  272363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.082448  272363 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.082473  272363 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:38.082515  272363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.112614  272363 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.112632  272363 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:38.112639  272363 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:38.112732  272363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-848035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
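
The empty `ExecStart=` in the unit text above is deliberate systemd drop-in syntax: assigning an empty value clears the ExecStart list inherited from the base kubelet.service before the replacement command line is added. That is why the generated drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down) always carries the pair:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (kubelet flags abbreviated)
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml ...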
	I1019 17:16:38.112803  272363 ssh_runner.go:195] Run: crio config
	I1019 17:16:38.164220  272363 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.164247  272363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.164265  272363 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:16:38.164294  272363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-848035 NodeName:newest-cni-848035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:38.164457  272363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-848035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
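
The YAML above is the full generated kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new (2211 bytes, per the scp line below). For checking such a file by hand, recent kubeadm releases ship a validator; a sketch, with the caveat that the subcommand's availability depends on the kubeadm version:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new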
	
	I1019 17:16:38.164529  272363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:38.173498  272363 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:38.173549  272363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:38.181591  272363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:16:38.196205  272363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:38.210033  272363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1019 17:16:38.224305  272363 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:38.229006  272363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
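
The one-liner above is an idempotent /etc/hosts edit: filter out any stale control-plane.minikube.internal line, append the current mapping, and copy the rebuilt file back over /etc/hosts via a temp file. Spelled out, under the same assumptions (tab-separated entry, bash):

	HOST=control-plane.minikube.internal
	IP=192.168.76.2
	# drop the old mapping (if any), then append the fresh one
	{ grep -v $'\t'"${HOST}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts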
	I1019 17:16:38.241325  272363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:38.343442  272363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:38.366970  272363 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035 for IP: 192.168.76.2
	I1019 17:16:38.367003  272363 certs.go:195] generating shared ca certs ...
	I1019 17:16:38.367025  272363 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:38.367205  272363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:16:38.367262  272363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:16:38.367275  272363 certs.go:257] generating profile certs ...
	I1019 17:16:38.367355  272363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/client.key
	I1019 17:16:38.367408  272363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key.facc7e69
	I1019 17:16:38.367448  272363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key
	I1019 17:16:38.367603  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:16:38.367649  272363 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:38.367663  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:16:38.367689  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:38.367717  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:38.367750  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:38.367794  272363 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:38.368555  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:38.389844  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:38.416463  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:38.441481  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:16:38.468211  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:16:38.490019  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:16:38.508191  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:38.528721  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/newest-cni-848035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:38.548883  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:16:38.568140  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:16:38.587376  272363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:38.615831  272363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:38.633521  272363 ssh_runner.go:195] Run: openssl version
	I1019 17:16:38.640143  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:16:38.650006  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.654634  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.654709  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:16:38.693942  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:16:38.703190  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:16:38.712169  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.716121  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.716173  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:16:38.753613  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:38.762997  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:38.773047  272363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.777659  272363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.777718  272363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:38.819882  272363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
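
The test/ls/openssl/ln sequences above implement OpenSSL's subject-hash lookup convention: clients resolve a CA by opening /etc/ssl/certs/<subject-hash>.0, so every trusted cert gets a symlink named after its hash (b5213941 for minikubeCA here). Reduced to its core:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"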
	I1019 17:16:38.828357  272363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:38.832536  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:16:38.871378  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:16:38.920495  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:16:38.971530  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:16:39.034802  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:16:39.099512  272363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
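
The six openssl runs above all pass -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube keys cert regeneration off that exit status. As a standalone check:

	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "cert good for at least another 24h"
	else
	  echo "cert expires within 24h; regenerate"
	fi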
	I1019 17:16:39.169716  272363 kubeadm.go:401] StartCluster: {Name:newest-cni-848035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-848035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:39.169962  272363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:39.170050  272363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:39.217835  272363 cri.go:89] found id: "7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f"
	I1019 17:16:39.217862  272363 cri.go:89] found id: "82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136"
	I1019 17:16:39.217868  272363 cri.go:89] found id: "1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e"
	I1019 17:16:39.217872  272363 cri.go:89] found id: "82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727"
	I1019 17:16:39.217876  272363 cri.go:89] found id: ""
	I1019 17:16:39.217923  272363 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:16:39.233895  272363 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:39Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:39.233963  272363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:39.245180  272363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:16:39.245204  272363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:16:39.245254  272363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:16:39.255914  272363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:16:39.257057  272363 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-848035" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:39.257876  272363 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-848035" cluster setting kubeconfig missing "newest-cni-848035" context setting]
	I1019 17:16:39.258975  272363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.261010  272363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:16:39.271996  272363 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:16:39.272139  272363 kubeadm.go:602] duration metric: took 26.818659ms to restartPrimaryControlPlane
	I1019 17:16:39.272162  272363 kubeadm.go:403] duration metric: took 102.450444ms to StartCluster
	I1019 17:16:39.272182  272363 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.272253  272363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:39.274855  272363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.275442  272363 config.go:182] Loaded profile config "newest-cni-848035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:39.275528  272363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:39.275568  272363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:39.275712  272363 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-848035"
	I1019 17:16:39.275727  272363 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-848035"
	W1019 17:16:39.275735  272363 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:16:39.275736  272363 addons.go:70] Setting dashboard=true in profile "newest-cni-848035"
	I1019 17:16:39.275751  272363 addons.go:239] Setting addon dashboard=true in "newest-cni-848035"
	W1019 17:16:39.275758  272363 addons.go:248] addon dashboard should already be in state true
	I1019 17:16:39.275776  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.275780  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.276271  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.276271  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.276357  272363 addons.go:70] Setting default-storageclass=true in profile "newest-cni-848035"
	I1019 17:16:39.276384  272363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-848035"
	I1019 17:16:39.276705  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.280987  272363 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:39.283264  272363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:39.305744  272363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:16:39.305768  272363 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:16:39.308123  272363 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:16:34.550249  268862 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 17:16:34.555727  268862 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:34.555751  268862 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:35.049375  268862 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 17:16:35.053837  268862 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1019 17:16:35.054853  268862 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:35.054877  268862 api_server.go:131] duration metric: took 1.00555724s to wait for apiserver health ...
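The 500-then-200 sequence above is the usual apiserver warm-up: /healthz reports each poststarthook and keeps failing until the RBAC bootstrap hook completes, and the client simply re-polls on an interval. A minimal sketch of such a loop, assuming an insecure TLS probe and a 500ms retry (waitForHealthz and its parameters are illustrative, not minikube's exact implementation):

```go
// Sketch of polling /healthz until the apiserver answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(endpoint string, timeout time.Duration) error {
	// The bootstrapping apiserver serves a not-yet-trusted cert, so a
	// health probe like this one skips verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists per-hook status lines such as
			// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", endpoint, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute))
}
```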
	I1019 17:16:35.054886  268862 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:35.058519  268862 system_pods.go:59] 8 kube-system pods found
	I1019 17:16:35.058562  268862 system_pods.go:61] "coredns-66bc5c9577-zw7d8" [e1cb390d-b0bd-4da0-9e8a-92250e2485cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:35.058573  268862 system_pods.go:61] "etcd-embed-certs-090139" [4082e3bc-d44c-4d23-83ab-6640758b2707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:35.058584  268862 system_pods.go:61] "kindnet-dwsh7" [e081eba9-4c2c-401b-84d2-1bfdd53460e9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:35.058590  268862 system_pods.go:61] "kube-apiserver-embed-certs-090139" [12c08735-a8cb-48a8-98ff-f464a4a93d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:35.058599  268862 system_pods.go:61] "kube-controller-manager-embed-certs-090139" [63f19dee-5d68-40ef-b15a-830203608d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:35.058605  268862 system_pods.go:61] "kube-proxy-8f4lh" [5baffb03-44e9-4304-a146-40598b517031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:35.058612  268862 system_pods.go:61] "kube-scheduler-embed-certs-090139" [38e53961-5825-4991-8ee7-21f75edb86ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:35.058618  268862 system_pods.go:61] "storage-provisioner" [761c74ff-17e1-44c3-b64d-dd9c9f9863d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:35.058624  268862 system_pods.go:74] duration metric: took 3.732419ms to wait for pod list to return data ...
	I1019 17:16:35.058634  268862 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:35.060857  268862 default_sa.go:45] found service account: "default"
	I1019 17:16:35.060874  268862 default_sa.go:55] duration metric: took 2.235746ms for default service account to be created ...
	I1019 17:16:35.060881  268862 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:16:35.063423  268862 system_pods.go:86] 8 kube-system pods found
	I1019 17:16:35.063448  268862 system_pods.go:89] "coredns-66bc5c9577-zw7d8" [e1cb390d-b0bd-4da0-9e8a-92250e2485cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:16:35.063455  268862 system_pods.go:89] "etcd-embed-certs-090139" [4082e3bc-d44c-4d23-83ab-6640758b2707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:35.063462  268862 system_pods.go:89] "kindnet-dwsh7" [e081eba9-4c2c-401b-84d2-1bfdd53460e9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:35.063471  268862 system_pods.go:89] "kube-apiserver-embed-certs-090139" [12c08735-a8cb-48a8-98ff-f464a4a93d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:35.063478  268862 system_pods.go:89] "kube-controller-manager-embed-certs-090139" [63f19dee-5d68-40ef-b15a-830203608d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:35.063488  268862 system_pods.go:89] "kube-proxy-8f4lh" [5baffb03-44e9-4304-a146-40598b517031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:35.063497  268862 system_pods.go:89] "kube-scheduler-embed-certs-090139" [38e53961-5825-4991-8ee7-21f75edb86ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:35.063505  268862 system_pods.go:89] "storage-provisioner" [761c74ff-17e1-44c3-b64d-dd9c9f9863d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:16:35.063516  268862 system_pods.go:126] duration metric: took 2.629242ms to wait for k8s-apps to be running ...
	I1019 17:16:35.063524  268862 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:16:35.063563  268862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:35.076395  268862 system_svc.go:56] duration metric: took 12.863749ms WaitForService to wait for kubelet
	I1019 17:16:35.076420  268862 kubeadm.go:587] duration metric: took 3.228442809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:35.076437  268862 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:35.079642  268862 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:35.079669  268862 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:35.079684  268862 node_conditions.go:105] duration metric: took 3.235964ms to run NodePressure ...
	I1019 17:16:35.079698  268862 start.go:242] waiting for startup goroutines ...
	I1019 17:16:35.079712  268862 start.go:247] waiting for cluster config update ...
	I1019 17:16:35.079724  268862 start.go:256] writing updated cluster config ...
	I1019 17:16:35.080036  268862 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:35.083786  268862 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:35.087316  268862 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zw7d8" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:16:37.094187  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	W1019 17:16:39.105580  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	I1019 17:16:39.308726  272363 addons.go:239] Setting addon default-storageclass=true in "newest-cni-848035"
	W1019 17:16:39.308753  272363 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:16:39.308782  272363 host.go:66] Checking if "newest-cni-848035" exists ...
	I1019 17:16:39.309191  272363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:39.309210  272363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:16:39.309351  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.309359  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:16:39.309373  272363 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:16:39.309432  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.309712  272363 cli_runner.go:164] Run: docker container inspect newest-cni-848035 --format={{.State.Status}}
	I1019 17:16:39.343585  272363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:39.343613  272363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:16:39.343675  272363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-848035
	I1019 17:16:39.344043  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.345230  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.372756  272363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/newest-cni-848035/id_rsa Username:docker}
	I1019 17:16:39.471918  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:39.480326  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:16:39.480419  272363 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:16:39.480453  272363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:39.506213  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:16:39.513765  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:16:39.513796  272363 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:16:39.537709  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:16:39.537736  272363 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:16:39.556979  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:16:39.557004  272363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:16:39.578503  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:16:39.578529  272363 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:16:39.599401  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:16:39.599437  272363 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:16:39.616578  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:16:39.616607  272363 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:16:39.630908  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:16:39.630950  272363 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:16:39.644651  272363 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:16:39.644678  272363 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:16:39.660847  272363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:16:41.672703  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20068394s)
	I1019 17:16:41.672780  272363 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.192300061s)
	I1019 17:16:41.672821  272363 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:41.672878  272363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:41.673187  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.16688659s)
	I1019 17:16:41.673668  272363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.012779902s)
	I1019 17:16:41.676143  272363 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-848035 addons enable metrics-server
	
	I1019 17:16:41.689805  272363 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 17:16:41.691507  272363 addons.go:515] duration metric: took 2.415945834s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
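Each dashboard manifest above is scp'd under /etc/kubernetes/addons and then applied in one kubectl invocation with repeated -f flags, run through sudo with the in-VM kubeconfig. A sketch of assembling that command (buildApplyCmd is a hypothetical helper):

```go
// Sketch of the single-invocation addon apply seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func buildApplyCmd(kubectl string, manifests []string) *exec.Cmd {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts leading VAR=value assignments, which is how the log's
	// command carries the kubeconfig into the root environment.
	return exec.Command("sudo", args...)
}

func main() {
	cmd := buildApplyCmd("/var/lib/minikube/binaries/v1.34.1/kubectl",
		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml", "/etc/kubernetes/addons/dashboard-svc.yaml"})
	fmt.Println(strings.Join(cmd.Args, " "))
}
```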
	I1019 17:16:41.692321  272363 api_server.go:72] duration metric: took 2.416731792s to wait for apiserver process to appear ...
	I1019 17:16:41.692341  272363 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:41.692359  272363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:41.697526  272363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:41.697550  272363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:42.193291  272363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.198684  272363 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:16:42.199865  272363 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:42.199892  272363 api_server.go:131] duration metric: took 507.542957ms to wait for apiserver health ...
	I1019 17:16:42.199903  272363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:42.203562  272363 system_pods.go:59] 8 kube-system pods found
	I1019 17:16:42.203609  272363 system_pods.go:61] "coredns-66bc5c9577-4r958" [c909784b-62ef-4de8-8c71-0fdb70321fab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:16:42.203630  272363 system_pods.go:61] "etcd-newest-cni-848035" [d4f66958-0c51-495b-9982-fbc5fa2eaf5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:16:42.203643  272363 system_pods.go:61] "kindnet-cldtb" [3371a4c2-e7be-4f7c-9e77-69cce40a6458] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:16:42.203654  272363 system_pods.go:61] "kube-apiserver-newest-cni-848035" [0381a31e-6a3a-48c8-acb0-905da2c9e2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:16:42.203668  272363 system_pods.go:61] "kube-controller-manager-newest-cni-848035" [26b369eb-d2b2-488b-afd2-8958a5a8f955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:42.203681  272363 system_pods.go:61] "kube-proxy-4xgrb" [f332f5bc-a940-414b-816d-fe262c303b5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:16:42.203690  272363 system_pods.go:61] "kube-scheduler-newest-cni-848035" [22b426e0-aafb-4a62-9535-894b75da5f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:42.203697  272363 system_pods.go:61] "storage-provisioner" [b4254e2b-7d6f-4957-a6c4-81ea56715968] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:16:42.203704  272363 system_pods.go:74] duration metric: took 3.794877ms to wait for pod list to return data ...
	I1019 17:16:42.203714  272363 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:42.206443  272363 default_sa.go:45] found service account: "default"
	I1019 17:16:42.206466  272363 default_sa.go:55] duration metric: took 2.745259ms for default service account to be created ...
	I1019 17:16:42.206479  272363 kubeadm.go:587] duration metric: took 2.930891741s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:16:42.206501  272363 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:42.209663  272363 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:16:42.209692  272363 node_conditions.go:123] node cpu capacity is 8
	I1019 17:16:42.209707  272363 node_conditions.go:105] duration metric: took 3.199964ms to run NodePressure ...
	I1019 17:16:42.209721  272363 start.go:242] waiting for startup goroutines ...
	I1019 17:16:42.209745  272363 start.go:247] waiting for cluster config update ...
	I1019 17:16:42.209759  272363 start.go:256] writing updated cluster config ...
	I1019 17:16:42.210186  272363 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:42.281420  272363 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:16:42.284529  272363 out.go:179] * Done! kubectl is now configured to use "newest-cni-848035" cluster and "default" namespace by default
	I1019 17:16:38.497872  274481 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-663015" ...
	I1019 17:16:38.497943  274481 cli_runner.go:164] Run: docker start default-k8s-diff-port-663015
	I1019 17:16:38.754432  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:38.775319  274481 kic.go:430] container "default-k8s-diff-port-663015" state is running.
	I1019 17:16:38.775801  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:38.796607  274481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/config.json ...
	I1019 17:16:38.796868  274481 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:38.796955  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:38.817427  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:38.817764  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:38.817786  274481 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:38.818415  274481 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56424->127.0.0.1:33099: read: connection reset by peer
	I1019 17:16:41.968235  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:16:41.968269  274481 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-663015"
	I1019 17:16:41.968336  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:41.995630  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:41.995938  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:41.995968  274481 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-663015 && echo "default-k8s-diff-port-663015" | sudo tee /etc/hostname
	I1019 17:16:42.165798  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-663015
	
	I1019 17:16:42.165916  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.192141  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:42.192575  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:42.192617  274481 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-663015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-663015/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-663015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:42.347247  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: 
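The guarded shell above keeps /etc/hosts idempotent: the 127.0.1.1 line is rewritten if present and appended otherwise, so repeated provisioning never duplicates the hostname entry. A sketch that renders the same guard for an arbitrary hostname (hostsPinCmd is an illustrative name):

```go
// Sketch of generating the idempotent /etc/hosts pinning command.
package main

import "fmt"

func hostsPinCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsPinCmd("default-k8s-diff-port-663015")) }
```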
	I1019 17:16:42.347335  274481 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:16:42.347378  274481 ubuntu.go:190] setting up certificates
	I1019 17:16:42.347392  274481 provision.go:84] configureAuth start
	I1019 17:16:42.347449  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:42.370961  274481 provision.go:143] copyHostCerts
	I1019 17:16:42.371034  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:16:42.371052  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:16:42.371141  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:16:42.371290  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:16:42.371306  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:16:42.371351  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:16:42.371437  274481 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:16:42.371450  274481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:16:42.371487  274481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:16:42.371561  274481 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-663015 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-663015 localhost minikube]
	I1019 17:16:42.431792  274481 provision.go:177] copyRemoteCerts
	I1019 17:16:42.431880  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:42.431924  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.453388  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:42.559643  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:42.580943  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:16:42.607817  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 17:16:42.630023  274481 provision.go:87] duration metric: took 282.6201ms to configureAuth
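configureAuth above regenerates the machine's server certificate against the minikube CA with the SAN set the log prints (127.0.0.1, the node IP, the profile name, localhost, minikube). A compact sketch of that step with crypto/x509; the key size, validity window, and the throwaway CA built in main are simplifying assumptions, not minikube's actual parameters:

```go
// Sketch of issuing a CA-signed server cert with the logged SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-663015"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as printed by provision.go:117 above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-663015", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikubeCA"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IsCA:         true, BasicConstraintsValid: true,
		KeyUsage: x509.KeyUsageCertSign,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
```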
	I1019 17:16:42.630047  274481 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:42.630243  274481 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:42.630362  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:42.650443  274481 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:42.650784  274481 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1019 17:16:42.650815  274481 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:39.790368  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:39.790807  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:39.790874  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:39.790928  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:39.829893  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:39.829917  219832 cri.go:89] found id: ""
	I1019 17:16:39.829927  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:39.829980  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:39.834841  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:39.834909  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:39.875492  219832 cri.go:89] found id: ""
	I1019 17:16:39.875518  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.875528  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:39.875535  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:39.875589  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:39.911131  219832 cri.go:89] found id: ""
	I1019 17:16:39.911158  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.911169  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:39.911181  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:39.911241  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:39.950100  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:39.950126  219832 cri.go:89] found id: ""
	I1019 17:16:39.950136  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:39.950192  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:39.955475  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:39.955543  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:39.990249  219832 cri.go:89] found id: ""
	I1019 17:16:39.990281  219832 logs.go:282] 0 containers: []
	W1019 17:16:39.990291  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:39.990299  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:39.990352  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:40.023216  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:40.023240  219832 cri.go:89] found id: ""
	I1019 17:16:40.023250  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:40.023308  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:40.027512  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:40.027576  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:40.061930  219832 cri.go:89] found id: ""
	I1019 17:16:40.061965  219832 logs.go:282] 0 containers: []
	W1019 17:16:40.061973  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:40.061978  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:40.062024  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:40.096738  219832 cri.go:89] found id: ""
	I1019 17:16:40.096772  219832 logs.go:282] 0 containers: []
	W1019 17:16:40.096787  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:40.096798  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:40.097021  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:40.250474  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:40.250525  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:40.272407  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:40.272446  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:40.363698  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:40.363724  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:40.363742  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:40.412778  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:40.412825  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:40.504759  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:40.505124  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:40.556041  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:40.556086  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:40.631963  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:40.632012  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
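With the apiserver refusing connections, the loop above falls back to runtime-level diagnostics: for each control-plane component it resolves container IDs by name and tails the last 400 log lines through crictl. A sketch of that gathering pass (gatherComponentLogs is a hypothetical helper):

```go
// Sketch of per-component log gathering via crictl, as in logs.go above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(component string) error {
	// Mirrors: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		// Mirrors: sudo crictl logs --tail 400 <id>
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println(c, err)
		}
	}
}
```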
	I1019 17:16:43.169211  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:43.169777  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1019 17:16:43.169946  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 17:16:43.170005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:16:43.212032  219832 cri.go:89] found id: "55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	I1019 17:16:43.212057  219832 cri.go:89] found id: ""
	I1019 17:16:43.212091  219832 logs.go:282] 1 containers: [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4]
	I1019 17:16:43.212147  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.217841  219832 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 17:16:43.217913  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:16:43.252451  219832 cri.go:89] found id: ""
	I1019 17:16:43.252479  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.252491  219832 logs.go:284] No container was found matching "etcd"
	I1019 17:16:43.252499  219832 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 17:16:43.252576  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:16:43.293365  219832 cri.go:89] found id: ""
	I1019 17:16:43.293404  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.293416  219832 logs.go:284] No container was found matching "coredns"
	I1019 17:16:43.293423  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:16:43.293481  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:16:43.325984  219832 cri.go:89] found id: "409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:43.326016  219832 cri.go:89] found id: ""
	I1019 17:16:43.326026  219832 logs.go:282] 1 containers: [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f]
	I1019 17:16:43.326110  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.329938  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:16:43.330005  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:16:43.370926  219832 cri.go:89] found id: ""
	I1019 17:16:43.370955  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.370965  219832 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:16:43.370974  219832 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:16:43.371057  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:16:43.411453  219832 cri.go:89] found id: "44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:43.411478  219832 cri.go:89] found id: ""
	I1019 17:16:43.411489  219832 logs.go:282] 1 containers: [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d]
	I1019 17:16:43.411541  219832 ssh_runner.go:195] Run: which crictl
	I1019 17:16:43.415693  219832 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 17:16:43.415748  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:16:43.449970  219832 cri.go:89] found id: ""
	I1019 17:16:43.449998  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.450007  219832 logs.go:284] No container was found matching "kindnet"
	I1019 17:16:43.450014  219832 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:16:43.450093  219832 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:16:43.490308  219832 cri.go:89] found id: ""
	I1019 17:16:43.490334  219832 logs.go:282] 0 containers: []
	W1019 17:16:43.490343  219832 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:16:43.490353  219832 logs.go:123] Gathering logs for kube-apiserver [55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4] ...
	I1019 17:16:43.490368  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 55181ab12a75f0828e8a499b545960e5dfe82c07024572a8d9b4566b0da1fba4"
	W1019 17:16:41.594037  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	W1019 17:16:44.095881  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	I1019 17:16:43.758624  274481 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:43.758653  274481 machine.go:97] duration metric: took 4.961766704s to provisionDockerMachine
	I1019 17:16:43.758666  274481 start.go:293] postStartSetup for "default-k8s-diff-port-663015" (driver="docker")
	I1019 17:16:43.758681  274481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:43.758749  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:43.758801  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:43.782192  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:43.891871  274481 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:43.896719  274481 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:43.896841  274481 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:43.896857  274481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:16:43.896924  274481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:16:43.897077  274481 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:16:43.897199  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:43.908894  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:43.947756  274481 start.go:296] duration metric: took 189.073166ms for postStartSetup
	I1019 17:16:43.947876  274481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:43.947926  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:43.975271  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.078918  274481 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:44.085274  274481 fix.go:56] duration metric: took 5.60935599s for fixHost
	I1019 17:16:44.085304  274481 start.go:83] releasing machines lock for "default-k8s-diff-port-663015", held for 5.609428848s
	I1019 17:16:44.085379  274481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-663015
	I1019 17:16:44.110831  274481 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:44.110908  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:44.111015  274481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:44.111092  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:44.135271  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.135750  274481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:16:44.238339  274481 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:44.320379  274481 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:44.367371  274481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:44.373006  274481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:44.373087  274481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:44.383516  274481 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:16:44.383542  274481 start.go:496] detecting cgroup driver to use...
	I1019 17:16:44.383576  274481 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:16:44.383629  274481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:44.405372  274481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:44.422022  274481 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:44.422114  274481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:44.442186  274481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:44.459354  274481 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:44.584124  274481 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:44.698334  274481 docker.go:234] disabling docker service ...
	I1019 17:16:44.698410  274481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:44.717935  274481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:44.735606  274481 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:44.856239  274481 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:44.960294  274481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:16:44.974250  274481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:44.989696  274481 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:44.989782  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:44.999654  274481 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:16:44.999734  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.014466  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.026758  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.039239  274481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:45.048888  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.059352  274481 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:45.068539  274481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
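	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys sketched below. The values come straight from the commands in this log; the section placement follows upstream cri-o conventions and is an assumption here, not a dump of the actual file:
	  sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  # [crio.image]
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  #
	  # [crio.runtime]
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  # ]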
	I1019 17:16:45.078630  274481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:45.087523  274481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:45.095914  274481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:45.182148  274481 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:16:45.673736  274481 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:45.673813  274481 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:45.678492  274481 start.go:564] Will wait 60s for crictl version
	I1019 17:16:45.678558  274481 ssh_runner.go:195] Run: which crictl
	I1019 17:16:45.682858  274481 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:45.709000  274481 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:45.709135  274481 ssh_runner.go:195] Run: crio --version
	I1019 17:16:45.737974  274481 ssh_runner.go:195] Run: crio --version
	I1019 17:16:45.767420  274481 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:16:45.768700  274481 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-663015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:45.786689  274481 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:45.790715  274481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:45.801189  274481 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:45.801335  274481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:45.801392  274481 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:45.840155  274481 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:45.840184  274481 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:45.840242  274481 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:45.869567  274481 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:45.869593  274481 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:45.869601  274481 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 17:16:45.869721  274481 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-663015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:16:45.869810  274481 ssh_runner.go:195] Run: crio config
	I1019 17:16:45.919133  274481 cni.go:84] Creating CNI manager for ""
	I1019 17:16:45.919152  274481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:45.919167  274481 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:16:45.919197  274481 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-663015 NodeName:default-k8s-diff-port-663015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:45.919305  274481 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-663015"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:16:45.919362  274481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:45.928122  274481 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:45.928187  274481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:45.937603  274481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 17:16:45.951631  274481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:45.965182  274481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
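	A minimal sketch for sanity-checking the staged config at this point, assuming "kubeadm config validate" is available in this kubeadm release (it is not invoked anywhere in this log):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new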
	I1019 17:16:45.982352  274481 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:45.986503  274481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
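	The /etc/hosts rewrite above is a filter-and-append idiom; unrolled for readability (same command as in the log, with printf '\t' standing in for the literal tab in the echo):
	  # Drop any stale control-plane.minikube.internal entry, append the
	  # current one, then copy the temp file back over /etc/hosts.
	  {
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    printf '192.168.85.2\tcontrol-plane.minikube.internal\n'
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts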
	I1019 17:16:45.997331  274481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:46.100061  274481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:46.130508  274481 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015 for IP: 192.168.85.2
	I1019 17:16:46.130533  274481 certs.go:195] generating shared ca certs ...
	I1019 17:16:46.130551  274481 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:46.130721  274481 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:16:46.130795  274481 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:16:46.130812  274481 certs.go:257] generating profile certs ...
	I1019 17:16:46.130917  274481 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/client.key
	I1019 17:16:46.130998  274481 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key.d3e891db
	I1019 17:16:46.131052  274481 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key
	I1019 17:16:46.131212  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:16:46.131252  274481 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:46.131264  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:16:46.131291  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:46.131322  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:46.131359  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:46.131414  274481 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:16:46.132221  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:46.153641  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:46.173259  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:46.193341  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:16:46.220552  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 17:16:46.244833  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:16:46.265626  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:46.288488  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/default-k8s-diff-port-663015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:16:46.309723  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:46.330105  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:16:46.349871  274481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:16:46.369194  274481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:46.383048  274481 ssh_runner.go:195] Run: openssl version
	I1019 17:16:46.389492  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:46.399919  274481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:46.404384  274481 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:46.404436  274481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:46.444977  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:46.453556  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:16:46.463659  274481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:16:46.467937  274481 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:16:46.467982  274481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:16:46.508106  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:16:46.518490  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:16:46.528893  274481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:16:46.533288  274481 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:16:46.533351  274481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:16:46.572139  274481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
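	The hash-and-symlink sequence above is the standard OpenSSL CA-directory layout: each certificate is linked as <subject-hash>.0 so TLS libraries can locate it by hash. A minimal sketch of the same two steps for one of the certificates from this log:
	  # Compute the subject hash, then link the cert under /etc/ssl/certs
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"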
	I1019 17:16:46.580931  274481 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:46.585405  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:16:46.623987  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:16:46.676291  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:16:46.730017  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:16:46.789920  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:16:46.849791  274481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 17:16:46.894802  274481 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-663015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-663015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:46.894920  274481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:46.894981  274481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:46.930721  274481 cri.go:89] found id: "6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238"
	I1019 17:16:46.930754  274481 cri.go:89] found id: "0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b"
	I1019 17:16:46.930759  274481 cri.go:89] found id: "98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1"
	I1019 17:16:46.930764  274481 cri.go:89] found id: "79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854"
	I1019 17:16:46.930768  274481 cri.go:89] found id: ""
	I1019 17:16:46.930814  274481 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:16:46.945801  274481 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:46Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:46.945873  274481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:46.955297  274481 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:16:46.955323  274481 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:16:46.955370  274481 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:16:46.963476  274481 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:16:46.964418  274481 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-663015" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:46.965355  274481 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-3731/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-663015" cluster setting kubeconfig missing "default-k8s-diff-port-663015" context setting]
	I1019 17:16:46.966402  274481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:46.968893  274481 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:16:46.979042  274481 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:16:46.979147  274481 kubeadm.go:602] duration metric: took 23.815767ms to restartPrimaryControlPlane
	I1019 17:16:46.979159  274481 kubeadm.go:403] duration metric: took 84.368282ms to StartCluster
	I1019 17:16:46.979190  274481 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:46.979258  274481 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:16:46.981568  274481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:46.981831  274481 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:46.981902  274481 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:46.982001  274481 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-663015"
	I1019 17:16:46.982018  274481 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-663015"
	W1019 17:16:46.982026  274481 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:16:46.982034  274481 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:46.982047  274481 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-663015"
	I1019 17:16:46.982054  274481 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:16:46.982061  274481 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-663015"
	W1019 17:16:46.982087  274481 addons.go:248] addon dashboard should already be in state true
	I1019 17:16:46.982116  274481 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:16:46.982132  274481 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-663015"
	I1019 17:16:46.982148  274481 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-663015"
	I1019 17:16:46.982435  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:46.982590  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:46.982590  274481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:16:46.985484  274481 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:46.988200  274481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:47.012891  274481 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:16:47.014280  274481 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:16:47.014336  274481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:16:47.014423  274481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:16:47.017214  274481 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:16:47.018945  274481 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:16:43.531547  219832 logs.go:123] Gathering logs for kube-scheduler [409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f] ...
	I1019 17:16:43.531588  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 409e034acfad743a1d17121dd6bfc970e488e4da51c9baf8a109bdb0cfbfea6f"
	I1019 17:16:43.600156  219832 logs.go:123] Gathering logs for kube-controller-manager [44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d] ...
	I1019 17:16:43.600192  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44a80a801bf4fae14b89098891aceb014a4308a5880e6bb47d1db7c41321104d"
	I1019 17:16:43.629004  219832 logs.go:123] Gathering logs for CRI-O ...
	I1019 17:16:43.629031  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 17:16:43.686175  219832 logs.go:123] Gathering logs for container status ...
	I1019 17:16:43.686206  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:16:43.725700  219832 logs.go:123] Gathering logs for kubelet ...
	I1019 17:16:43.725733  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 17:16:43.868758  219832 logs.go:123] Gathering logs for dmesg ...
	I1019 17:16:43.868794  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:16:43.888458  219832 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:16:43.888496  219832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 17:16:43.982147  219832 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 17:16:46.483146  219832 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 17:16:46.483567  219832 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
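	The failing healthz probe above can be reproduced by hand while the apiserver is down (endpoint taken from the log; -k skips TLS verification for a quick check, mirroring the 2-second timeout the tooling uses):
	  # Expect "connection refused" until kube-apiserver is back up
	  curl -k --max-time 2 https://192.168.94.2:8443/healthz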
	I1019 17:16:46.483632  219832 kubeadm.go:602] duration metric: took 4m4.338164829s to restartPrimaryControlPlane
	W1019 17:16:46.483701  219832 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1019 17:16:46.483761  219832 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1019 17:16:47.127912  219832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:47.147654  219832 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:16:47.161051  219832 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:16:47.161207  219832 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:16:47.172905  219832 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:16:47.172923  219832 kubeadm.go:158] found existing configuration files:
	
	I1019 17:16:47.172967  219832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:16:47.185682  219832 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:16:47.185759  219832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:16:47.196805  219832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:16:47.207579  219832 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:16:47.207646  219832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:16:47.222348  219832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:16:47.235957  219832 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:16:47.236131  219832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:16:47.251729  219832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:16:47.264526  219832 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:16:47.264593  219832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:16:47.274927  219832 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:16:47.324750  219832 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:16:47.324833  219832 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:16:47.356410  219832 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:16:47.356497  219832 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:16:47.356543  219832 kubeadm.go:319] OS: Linux
	I1019 17:16:47.356602  219832 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:16:47.356663  219832 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:16:47.356831  219832 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:16:47.356945  219832 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:16:47.357094  219832 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:16:47.357158  219832 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:16:47.357222  219832 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:16:47.357280  219832 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:16:47.452328  219832 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:16:47.452554  219832 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:16:47.452768  219832 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:16:47.462371  219832 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
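	For readability, the init invocation recorded above, reflowed so each skipped preflight check is visible (content identical to the log line; the IGNORE variable is only a presentation device):
	  IGNORE=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube
	  IGNORE=$IGNORE,DirAvailable--var-lib-minikube-etcd
	  IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
	  IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
	  IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
	  IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-etcd.yaml
	  IGNORE=$IGNORE,Port-10250,Swap,NumCPU,Mem,SystemVerification
	  IGNORE=$IGNORE,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors="$IGNORE"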
	
	
	==> CRI-O <==
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.762215848Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4xgrb/POD" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.762303511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.764933253Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.766040596Z" level=info msg="Ran pod sandbox f039a0d374df465cc551d5721b0186c9c29149745f538e5494a2736bd51878f4 with infra container: kube-system/kindnet-cldtb/POD" id=4b23d5eb-a75f-424a-8cfc-f4d3c587b26a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.766569447Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.767516768Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=258298a4-82ef-442c-b34b-63abd40ced06 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.76828476Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.769175959Z" level=info msg="Ran pod sandbox be267c736ec688aa18a3940a8f0b0f4ba175330ed538b5e2133a47987dae89dc with infra container: kube-system/kube-proxy-4xgrb/POD" id=05cf64d1-57fe-473f-9b26-dfa561cd1b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.769209204Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1837fe2f-b9c5-486e-927c-9489a5462d03 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770180274Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2c9f3de9-2bd1-44a1-ada8-63a932a16728 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770473119Z" level=info msg="Creating container: kube-system/kindnet-cldtb/kindnet-cni" id=de18db60-e0be-4722-b305-efec2f851a42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.770777859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.771291016Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8a8ef59a-9cb8-45a2-870b-cc12fbc12650 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.772348293Z" level=info msg="Creating container: kube-system/kube-proxy-4xgrb/kube-proxy" id=9522e815-2412-4a25-983a-c96d76cb4f6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.773504104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.774916892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.775517207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.780349346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.780934094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.807268339Z" level=info msg="Created container 64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24: kube-system/kindnet-cldtb/kindnet-cni" id=de18db60-e0be-4722-b305-efec2f851a42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.809325212Z" level=info msg="Starting container: 64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24" id=85779174-a876-4bf6-900a-c889173c06cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.812158678Z" level=info msg="Created container bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f: kube-system/kube-proxy-4xgrb/kube-proxy" id=9522e815-2412-4a25-983a-c96d76cb4f6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.812862589Z" level=info msg="Started container" PID=1031 containerID=64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24 description=kube-system/kindnet-cldtb/kindnet-cni id=85779174-a876-4bf6-900a-c889173c06cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=f039a0d374df465cc551d5721b0186c9c29149745f538e5494a2736bd51878f4
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.813981896Z" level=info msg="Starting container: bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f" id=ad23ff0d-0904-4b25-ba89-2bbeba160fa5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:41 newest-cni-848035 crio[514]: time="2025-10-19T17:16:41.817279927Z" level=info msg="Started container" PID=1032 containerID=bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f description=kube-system/kube-proxy-4xgrb/kube-proxy id=ad23ff0d-0904-4b25-ba89-2bbeba160fa5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be267c736ec688aa18a3940a8f0b0f4ba175330ed538b5e2133a47987dae89dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bbcdda98e3574       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   be267c736ec68       kube-proxy-4xgrb                            kube-system
	64839406f6a58       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   f039a0d374df4       kindnet-cldtb                               kube-system
	7161f4ea7f312       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   f5dd9001df0e3       etcd-newest-cni-848035                      kube-system
	82ba1cda1f516       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   7dea4bff6c66b       kube-controller-manager-newest-cni-848035   kube-system
	1c8d739ff68a7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   eabd9ae8ab8e4       kube-scheduler-newest-cni-848035            kube-system
	82a653d8de936       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   2c7a515bb7a19       kube-apiserver-newest-cni-848035            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-848035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-848035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-848035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_16_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:16:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-848035
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:16:41 +0000   Sun, 19 Oct 2025 17:16:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-848035
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                646096bf-34be-4122-8013-0d1b140e3606
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-848035                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-cldtb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-848035             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-848035    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-4xgrb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-848035             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s   kubelet          Node newest-cni-848035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s   kubelet          Node newest-cni-848035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s   kubelet          Node newest-cni-848035 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node newest-cni-848035 event: Registered Node newest-cni-848035 in Controller
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-848035 event: Registered Node newest-cni-848035 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [7161f4ea7f31214468dd438ccf92489be711cdfc8d6872eaa7921269b21b986f] <==
	{"level":"warn","ts":"2025-10-19T17:16:40.073092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.084321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.096906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.105449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.114243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.126635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.136607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.144657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.155331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.162888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.181049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.188719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.198954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.207276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.218239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.227582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.237729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.252300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.263796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.276856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.286025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.303359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.312219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.323215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.391536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:16:49 up 59 min,  0 user,  load average: 4.27, 3.17, 1.92
	Linux newest-cni-848035 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [64839406f6a583b3ec2976b1c494c642f16e35e9dc81ce7b11da661dfdc9da24] <==
	I1019 17:16:42.012405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:42.012787       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:16:42.012891       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:42.012905       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:42.012917       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:42.389208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:42.389243       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:42.389255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:42.390182       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:42.789360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:42.789389       1 metrics.go:72] Registering metrics
	I1019 17:16:42.789470       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [82a653d8de9363d72328ad5104900829cd8f26df51d681e55e0be8cc95ec3727] <==
	I1019 17:16:41.039910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:16:41.040889       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:41.039972       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:16:41.039951       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:16:41.039798       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:41.042019       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:41.042110       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:41.042134       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:41.042142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:41.042151       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:41.047663       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:41.069447       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:41.073316       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:41.384267       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:41.420208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:41.443901       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:41.454049       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:41.462982       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:41.541049       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.156.229"}
	I1019 17:16:41.556099       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.245.90"}
	I1019 17:16:41.943331       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:44.808524       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:44.856586       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:44.909736       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [82ba1cda1f516989069b3e38d150e44948c2f2be79b66e33581f811e725c1136] <==
	I1019 17:16:44.403877       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:16:44.403896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:16:44.403856       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:16:44.403968       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:44.403996       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:16:44.404002       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:16:44.405741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:16:44.405741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:16:44.405815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:44.406128       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:16:44.406151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:16:44.407009       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:16:44.407117       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:44.408238       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:44.410383       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:44.420692       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:16:44.420831       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:16:44.420895       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:16:44.420906       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:44.420914       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:44.422949       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:16:44.423111       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:16:44.423247       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-848035"
	I1019 17:16:44.423315       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:16:44.430943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bbcdda98e3574c313aedf6fad5937b0627c232e0cdf3ef130c9d87b28d06465f] <==
	I1019 17:16:41.854052       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:41.911125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:42.011903       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:42.011940       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:16:42.012052       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:42.036823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:42.036893       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:42.044114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:42.044804       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:42.044843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:42.046892       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:42.046908       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:42.048159       1 config.go:200] "Starting service config controller"
	I1019 17:16:42.048174       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:42.048537       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:42.049411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:42.048943       1 config.go:309] "Starting node config controller"
	I1019 17:16:42.049439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:42.049447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:42.147129       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:42.149060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:42.149631       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1c8d739ff68a706977ec10a4d83ed670ded08b67ebc1e618d401e2ecdfa2191e] <==
	I1019 17:16:40.009924       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:16:40.949473       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:16:40.949506       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:16:40.949518       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:16:40.949527       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:16:40.989677       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:40.989704       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:40.997411       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:40.997543       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:41.000623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:41.000799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 17:16:41.014758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:16:41.015176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:16:41.015030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:16:41.017585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:16:41.017695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1019 17:16:41.098943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:40 newest-cni-848035 kubelet[662]: E1019 17:16:40.505129     662 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-848035\" not found" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.053999     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065208     662 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065322     662 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.065362     662 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.066270     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.066813     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-848035\" already exists" pod="kube-system/etcd-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.066846     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.074667     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-848035\" already exists" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.074704     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.081994     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-848035\" already exists" pod="kube-system/kube-controller-manager-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.082043     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.093535     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-848035\" already exists" pod="kube-system/kube-scheduler-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.449546     662 apiserver.go:52] "Watching apiserver"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.453256     662 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464528     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-xtables-lock\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464578     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-cni-cfg\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464613     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f332f5bc-a940-414b-816d-fe262c303b5a-lib-modules\") pod \"kube-proxy-4xgrb\" (UID: \"f332f5bc-a940-414b-816d-fe262c303b5a\") " pod="kube-system/kube-proxy-4xgrb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464633     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-xtables-lock\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.464707     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3371a4c2-e7be-4f7c-9e77-69cce40a6458-lib-modules\") pod \"kindnet-cldtb\" (UID: \"3371a4c2-e7be-4f7c-9e77-69cce40a6458\") " pod="kube-system/kindnet-cldtb"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: I1019 17:16:41.504781     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:41 newest-cni-848035 kubelet[662]: E1019 17:16:41.511668     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-848035\" already exists" pod="kube-system/kube-apiserver-newest-cni-848035"
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:16:43 newest-cni-848035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
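The repeated etcd "rejected connection on client endpoint … EOF" warnings in the log above are what etcd emits when a client opens a TCP connection to its client port and closes it again without sending any data or completing a TLS handshake, as a bare port-liveness probe does. A minimal Go sketch of such a probe follows; the address and timeout are illustrative assumptions, not values taken from this run:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeTCP dials addr and closes the connection immediately.
    // Against a TLS-serving endpoint such as etcd's client port, the
    // server sees an EOF before any handshake and logs a warning like
    // "rejected connection on client endpoint".
    func probeTCP(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	// 127.0.0.1:2379 is etcd's default client port (assumed here).
    	if err := probeTCP("127.0.0.1:2379", 2*time.Second); err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("port accepted a connection")
    }

Probes like this are harmless; the warnings only indicate that something is checking the port, not that etcd is rejecting real clients.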
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-848035 -n newest-cni-848035
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-848035 -n newest-cni-848035: exit status 2 (365.947941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-848035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr: exit status 1 (78.74833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4r958" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zr7l6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4ttrr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-848035 describe pod coredns-66bc5c9577-4r958 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zr7l6 kubernetes-dashboard-855c9754f9-4ttrr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.94s)
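The post-mortem above (helpers_test.go:269) finds the non-running pods with a jsonpath query plus a field selector on status.phase. A minimal sketch of the same query driven from Go, assuming kubectl is on PATH; the context name is copied from this run but otherwise illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // nonRunningPods shells out to kubectl the same way the post-mortem
    // helper does: list pod names across all namespaces whose phase is
    // not Running.
    func nonRunningPods(context string) ([]string, error) {
    	out, err := exec.Command("kubectl", "--context", context,
    		"get", "po", "-A",
    		"-o=jsonpath={.items[*].metadata.name}",
    		"--field-selector=status.phase!=Running").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	pods, err := nonRunningPods("newest-cni-848035")
    	if err != nil {
    		fmt.Println("kubectl failed:", err)
    		return
    	}
    	fmt.Println("non-running pods:", pods)
    }

Note the race visible above: the pods listed as non-running at helpers_test.go:280 were already gone by the time the describe at helpers_test.go:285 ran, hence the NotFound errors.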

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-090139 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-090139 --alsologtostderr -v=1: exit status 80 (2.496827628s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-090139 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:17:20.303643  286823 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:20.303896  286823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:20.303908  286823 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:20.303915  286823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:20.304164  286823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:20.304390  286823 out.go:368] Setting JSON to false
	I1019 17:17:20.304428  286823 mustload.go:66] Loading cluster: embed-certs-090139
	I1019 17:17:20.304789  286823 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:20.305181  286823 cli_runner.go:164] Run: docker container inspect embed-certs-090139 --format={{.State.Status}}
	I1019 17:17:20.324350  286823 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:17:20.324719  286823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:20.396376  286823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-19 17:17:20.383454876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:20.397037  286823 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-090139 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:17:20.398917  286823 out.go:179] * Pausing node embed-certs-090139 ... 
	I1019 17:17:20.400165  286823 host.go:66] Checking if "embed-certs-090139" exists ...
	I1019 17:17:20.400420  286823 ssh_runner.go:195] Run: systemctl --version
	I1019 17:17:20.400458  286823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-090139
	I1019 17:17:20.421487  286823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/embed-certs-090139/id_rsa Username:docker}
	I1019 17:17:20.522123  286823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:20.535165  286823 pause.go:52] kubelet running: true
	I1019 17:17:20.535233  286823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:20.692484  286823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:20.692587  286823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:20.765244  286823 cri.go:89] found id: "db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae"
	I1019 17:17:20.765270  286823 cri.go:89] found id: "f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311"
	I1019 17:17:20.765276  286823 cri.go:89] found id: "032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888"
	I1019 17:17:20.765279  286823 cri.go:89] found id: "2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	I1019 17:17:20.765282  286823 cri.go:89] found id: "0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052"
	I1019 17:17:20.765285  286823 cri.go:89] found id: "3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda"
	I1019 17:17:20.765288  286823 cri.go:89] found id: "8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85"
	I1019 17:17:20.765292  286823 cri.go:89] found id: "7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974"
	I1019 17:17:20.765295  286823 cri.go:89] found id: "d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0"
	I1019 17:17:20.765303  286823 cri.go:89] found id: "c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	I1019 17:17:20.765306  286823 cri.go:89] found id: "39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670"
	I1019 17:17:20.765310  286823 cri.go:89] found id: ""
	I1019 17:17:20.765355  286823 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:20.777526  286823 retry.go:31] will retry after 349.465138ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:20Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:17:21.128156  286823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:21.142272  286823 pause.go:52] kubelet running: false
	I1019 17:17:21.142336  286823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:21.290273  286823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:21.290356  286823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:21.363835  286823 cri.go:89] found id: "db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae"
	I1019 17:17:21.363863  286823 cri.go:89] found id: "f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311"
	I1019 17:17:21.363869  286823 cri.go:89] found id: "032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888"
	I1019 17:17:21.363875  286823 cri.go:89] found id: "2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	I1019 17:17:21.363881  286823 cri.go:89] found id: "0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052"
	I1019 17:17:21.363886  286823 cri.go:89] found id: "3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda"
	I1019 17:17:21.363891  286823 cri.go:89] found id: "8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85"
	I1019 17:17:21.363895  286823 cri.go:89] found id: "7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974"
	I1019 17:17:21.363900  286823 cri.go:89] found id: "d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0"
	I1019 17:17:21.363909  286823 cri.go:89] found id: "c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	I1019 17:17:21.363915  286823 cri.go:89] found id: "39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670"
	I1019 17:17:21.363927  286823 cri.go:89] found id: ""
	I1019 17:17:21.363986  286823 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:21.377133  286823 retry.go:31] will retry after 379.829438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:21Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:17:21.757746  286823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:21.771533  286823 pause.go:52] kubelet running: false
	I1019 17:17:21.771617  286823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:21.936662  286823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:21.936766  286823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:22.010170  286823 cri.go:89] found id: "db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae"
	I1019 17:17:22.010202  286823 cri.go:89] found id: "f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311"
	I1019 17:17:22.010208  286823 cri.go:89] found id: "032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888"
	I1019 17:17:22.010213  286823 cri.go:89] found id: "2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	I1019 17:17:22.010217  286823 cri.go:89] found id: "0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052"
	I1019 17:17:22.010222  286823 cri.go:89] found id: "3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda"
	I1019 17:17:22.010226  286823 cri.go:89] found id: "8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85"
	I1019 17:17:22.010230  286823 cri.go:89] found id: "7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974"
	I1019 17:17:22.010234  286823 cri.go:89] found id: "d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0"
	I1019 17:17:22.010254  286823 cri.go:89] found id: "c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	I1019 17:17:22.010260  286823 cri.go:89] found id: "39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670"
	I1019 17:17:22.010263  286823 cri.go:89] found id: ""
	I1019 17:17:22.010309  286823 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:22.022302  286823 retry.go:31] will retry after 457.885288ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:22Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:17:22.480681  286823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:22.495242  286823 pause.go:52] kubelet running: false
	I1019 17:17:22.495325  286823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:22.653207  286823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:22.653292  286823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:22.724034  286823 cri.go:89] found id: "db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae"
	I1019 17:17:22.724060  286823 cri.go:89] found id: "f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311"
	I1019 17:17:22.724064  286823 cri.go:89] found id: "032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888"
	I1019 17:17:22.724102  286823 cri.go:89] found id: "2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	I1019 17:17:22.724107  286823 cri.go:89] found id: "0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052"
	I1019 17:17:22.724112  286823 cri.go:89] found id: "3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda"
	I1019 17:17:22.724117  286823 cri.go:89] found id: "8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85"
	I1019 17:17:22.724121  286823 cri.go:89] found id: "7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974"
	I1019 17:17:22.724126  286823 cri.go:89] found id: "d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0"
	I1019 17:17:22.724133  286823 cri.go:89] found id: "c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	I1019 17:17:22.724144  286823 cri.go:89] found id: "39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670"
	I1019 17:17:22.724147  286823 cri.go:89] found id: ""
	I1019 17:17:22.724191  286823 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:22.740667  286823 out.go:203] 
	W1019 17:17:22.742120  286823 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:17:22.742142  286823 out.go:285] * 
	* 
	W1019 17:17:22.746182  286823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:17:22.747447  286823 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-090139 --alsologtostderr -v=1 failed: exit status 80
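Every retry in the stderr above re-runs `sudo runc list -f json` and fails the same way: `open /run/runc: no such file or directory`, i.e. the default runc state root is simply absent on this CRI-O node, so backing off and retrying cannot help. A minimal Go sketch of probing a few candidate state roots before giving up; this is not minikube's actual pause path, and the candidate paths are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listContainers tries `runc list -f json` against several state
    // roots and returns the first successful output. The candidate
    // roots below (runc's default and containerd's) are illustrative
    // assumptions, not minikube's actual logic.
    func listContainers(roots []string) ([]byte, error) {
    	var lastErr error
    	for _, root := range roots {
    		out, err := exec.Command("sudo", "runc", "--root", root,
    			"list", "-f", "json").Output()
    		if err == nil {
    			return out, nil
    		}
    		lastErr = fmt.Errorf("root %s: %w", root, err)
    	}
    	return nil, lastErr
    }

    func main() {
    	out, err := listContainers([]string{"/run/runc", "/run/containerd/runc/k8s.io"})
    	if err != nil {
    		fmt.Println("no usable runc root:", err)
    		return
    	}
    	fmt.Printf("%s\n", out)
    }

If every candidate root is missing, a caller ends up exactly where the test did: the pause aborts with GUEST_PAUSE and exit status 80.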
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-090139
helpers_test.go:243: (dbg) docker inspect embed-certs-090139:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	        "Created": "2025-10-19T17:15:20.164222926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269072,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:24.786634875Z",
	            "FinishedAt": "2025-10-19T17:16:23.947799143Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3-json.log",
	        "Name": "/embed-certs-090139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-090139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-090139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	                "LowerDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-090139",
	                "Source": "/var/lib/docker/volumes/embed-certs-090139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-090139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-090139",
	                "name.minikube.sigs.k8s.io": "embed-certs-090139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9a51b9c047845ad7b5390c71c1f39794dc5f446cf895f8b70aa9ff12768bad1",
	            "SandboxKey": "/var/run/docker/netns/f9a51b9c0478",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-090139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:3e:6e:ef:ec:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f3b41047906a4786b547f272192944794206cd82d35412a1c4498289619b68a",
	                    "EndpointID": "23d8a85c3c449fc368daa01d1bf0d44d13c92115cac370611ee2fe236baca9b5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-090139",
	                        "491b138dfd3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
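The inspect output above shows how the kicbase container publishes its service ports (22/tcp for SSH, 2376 for the Docker API, 5000 for the registry, 8443 for the Kubernetes API server, and 32443) on ephemeral 127.0.0.1 ports. A single mapping can be recovered with the same Go template the harness itself runs later in these logs; a minimal sketch, assuming the embed-certs-090139 container still exists:

	# print the host port backing the API server's 8443/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-090139

Against the output above this prints 33092.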
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139: exit status 2 (367.296954ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
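Exit status 2 from the status probe is expected after a pause: minikube status appears to encode component health in its exit code, and pausing leaves the host Running while the kubelet and API server are stopped, which is why the helper flags the error as "may be ok". Dropping the --format flag shows the per-component breakdown; a quick check, assuming the profile still exists:

	out/minikube-linux-amd64 status -p embed-certs-090139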
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25: (1.311301276s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-663015 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-624324               │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ image   │ embed-certs-090139 image list --format=json                                                                                                                                                                                                   │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p embed-certs-090139 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:17:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:17:07.911145  284195 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:07.911485  284195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:07.911498  284195 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:07.911504  284195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:07.911744  284195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:07.912432  284195 out.go:368] Setting JSON to false
	I1019 17:17:07.913922  284195 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3574,"bootTime":1760890654,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:17:07.914040  284195 start.go:143] virtualization: kvm guest
	I1019 17:17:07.916449  284195 out.go:179] * [kindnet-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:17:07.918177  284195 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:17:07.918173  284195 notify.go:221] Checking for updates...
	I1019 17:17:07.919770  284195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:17:07.921268  284195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:07.922623  284195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:17:07.924398  284195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:17:07.926316  284195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:17:07.928030  284195 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928161  284195 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928272  284195 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928385  284195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:17:07.955289  284195 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:17:07.955374  284195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:08.021597  284195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:17:08.010265899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:08.021703  284195 docker.go:319] overlay module found
	I1019 17:17:08.023523  284195 out.go:179] * Using the docker driver based on user configuration
	I1019 17:17:08.024696  284195 start.go:309] selected driver: docker
	I1019 17:17:08.024713  284195 start.go:930] validating driver "docker" against <nil>
	I1019 17:17:08.024724  284195 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:17:08.025299  284195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:08.095598  284195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:17:08.082271965 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:08.095740  284195 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:17:08.095956  284195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:08.097623  284195 out.go:179] * Using Docker driver with root privileges
	I1019 17:17:08.098923  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:08.098952  284195 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:17:08.099199  284195 start.go:353] cluster config:
	{Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:08.101273  284195 out.go:179] * Starting "kindnet-624324" primary control-plane node in "kindnet-624324" cluster
	I1019 17:17:08.102517  284195 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:17:08.103771  284195 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:17:08.105096  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:08.105143  284195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:17:08.105147  284195 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:17:08.105160  284195 cache.go:59] Caching tarball of preloaded images
	I1019 17:17:08.105264  284195 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:17:08.105288  284195 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:17:08.105403  284195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json ...
	I1019 17:17:08.105428  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json: {Name:mk605196bae1a2f9aab06ad07829a616de1a599f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:08.129932  284195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:17:08.129955  284195 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:17:08.129973  284195 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:17:08.130004  284195 start.go:360] acquireMachinesLock for kindnet-624324: {Name:mk2a20d18414afeb65441c6d6d63ed8b022dba64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:08.130127  284195 start.go:364] duration metric: took 99.58µs to acquireMachinesLock for "kindnet-624324"
	I1019 17:17:08.130157  284195 start.go:93] Provisioning new machine with config: &{Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:08.130245  284195 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:17:03.967488  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:05.968220  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:05.092522  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	I1019 17:17:07.092751  268862 pod_ready.go:94] pod "coredns-66bc5c9577-zw7d8" is "Ready"
	I1019 17:17:07.092783  268862 pod_ready.go:86] duration metric: took 32.005446848s for pod "coredns-66bc5c9577-zw7d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.095535  268862 pod_ready.go:83] waiting for pod "etcd-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.099757  268862 pod_ready.go:94] pod "etcd-embed-certs-090139" is "Ready"
	I1019 17:17:07.099784  268862 pod_ready.go:86] duration metric: took 4.218129ms for pod "etcd-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.101852  268862 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.105603  268862 pod_ready.go:94] pod "kube-apiserver-embed-certs-090139" is "Ready"
	I1019 17:17:07.105627  268862 pod_ready.go:86] duration metric: took 3.749516ms for pod "kube-apiserver-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.107509  268862 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.291717  268862 pod_ready.go:94] pod "kube-controller-manager-embed-certs-090139" is "Ready"
	I1019 17:17:07.291746  268862 pod_ready.go:86] duration metric: took 184.216962ms for pod "kube-controller-manager-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.490889  268862 pod_ready.go:83] waiting for pod "kube-proxy-8f4lh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.891341  268862 pod_ready.go:94] pod "kube-proxy-8f4lh" is "Ready"
	I1019 17:17:07.891372  268862 pod_ready.go:86] duration metric: took 400.457732ms for pod "kube-proxy-8f4lh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.091752  268862 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.491497  268862 pod_ready.go:94] pod "kube-scheduler-embed-certs-090139" is "Ready"
	I1019 17:17:08.491528  268862 pod_ready.go:86] duration metric: took 399.738799ms for pod "kube-scheduler-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.491543  268862 pod_ready.go:40] duration metric: took 33.407732288s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:08.547078  268862 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:17:08.549485  268862 out.go:179] * Done! kubectl is now configured to use "embed-certs-090139" cluster and "default" namespace by default
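	At this point the embed-certs-090139 restart has completed: every control-plane pod was polled until Ready and the kubeconfig context was switched. A hypothetical follow-up check against the same pods the log just waited on (not part of the test run, and assuming minikube's default context naming):

		kubectl --context embed-certs-090139 get pods -n kube-system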
	I1019 17:17:07.713638  279986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:17:07.728095  279986 ssh_runner.go:195] Run: openssl version
	I1019 17:17:07.735460  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:17:07.744817  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.749085  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.749148  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.785565  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:17:07.794701  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:17:07.803505  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.807525  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.807613  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.846607  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:17:07.855942  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:17:07.864812  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.869099  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.869159  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.909215  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
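	The certificate steps above reproduce OpenSSL's hashed-symlink layout by hand: each CA is linked into /etc/ssl/certs, hashed with openssl x509 -hash -noout, and linked again as <hash>.0 so the TLS stack can look it up by subject hash (b5213941.0 for minikubeCA here). A condensed sketch of the same procedure for one PEM, with illustrative paths:

		# compute the subject hash, then create the lookup symlink
		hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"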
	I1019 17:17:07.919715  279986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:17:07.923900  279986 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:17:07.923974  279986 kubeadm.go:401] StartCluster: {Name:auto-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:07.924074  279986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:17:07.924137  279986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:17:07.954304  279986 cri.go:89] found id: ""
	I1019 17:17:07.954371  279986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:17:07.963082  279986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:17:07.973291  279986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:17:07.973354  279986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:17:07.984866  279986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:17:07.984885  279986 kubeadm.go:158] found existing configuration files:
	
	I1019 17:17:07.984943  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:17:07.995186  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:17:07.995249  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:17:08.004930  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:17:08.014144  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:17:08.014214  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:17:08.022823  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:17:08.031130  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:17:08.031192  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:17:08.038841  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:17:08.049250  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:17:08.049306  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:17:08.058309  279986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:17:08.127897  279986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:17:08.204987  279986 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:17:08.132894  284195 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:17:08.133202  284195 start.go:159] libmachine.API.Create for "kindnet-624324" (driver="docker")
	I1019 17:17:08.133239  284195 client.go:171] LocalClient.Create starting
	I1019 17:17:08.133308  284195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:17:08.133345  284195 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:08.133366  284195 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:08.133448  284195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:17:08.133485  284195 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:08.133501  284195 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:08.133954  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:17:08.155830  284195 cli_runner.go:211] docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:17:08.155936  284195 network_create.go:284] running [docker network inspect kindnet-624324] to gather additional debugging logs...
	I1019 17:17:08.155963  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324
	W1019 17:17:08.178825  284195 cli_runner.go:211] docker network inspect kindnet-624324 returned with exit code 1
	I1019 17:17:08.178879  284195 network_create.go:287] error running [docker network inspect kindnet-624324]: docker network inspect kindnet-624324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-624324 not found
	I1019 17:17:08.178900  284195 network_create.go:289] output of [docker network inspect kindnet-624324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-624324 not found
	
	** /stderr **
	I1019 17:17:08.179015  284195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:08.201485  284195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:17:08.202620  284195 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:17:08.203616  284195 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:17:08.204577  284195 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a9c8e7e3ba20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:77:c0:aa:7f:5e} reservation:<nil>}
	I1019 17:17:08.205661  284195 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11e31399831a IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:85:d0:14:cb:57} reservation:<nil>}
	I1019 17:17:08.207035  284195 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018972a0}
	I1019 17:17:08.207140  284195 network_create.go:124] attempt to create docker network kindnet-624324 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 17:17:08.207204  284195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-624324 kindnet-624324
	I1019 17:17:08.273186  284195 network_create.go:108] docker network kindnet-624324 192.168.94.0/24 created
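	Subnet selection above is a linear probe: the IPAM config of each existing bridge is inspected, taken /24s (192.168.49.0 through 192.168.85.0) are skipped, and the first free candidate (192.168.94.0/24) becomes the new network. The taken subnets can be listed with the same template the harness uses; a sketch, with output varying by Docker version:

		# show every network's name and its IPAM subnet(s)
		docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'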
	I1019 17:17:08.273220  284195 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-624324" container
	I1019 17:17:08.273287  284195 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:17:08.293001  284195 cli_runner.go:164] Run: docker volume create kindnet-624324 --label name.minikube.sigs.k8s.io=kindnet-624324 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:17:08.317157  284195 oci.go:103] Successfully created a docker volume kindnet-624324
	I1019 17:17:08.317263  284195 cli_runner.go:164] Run: docker run --rm --name kindnet-624324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-624324 --entrypoint /usr/bin/test -v kindnet-624324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:17:08.752902  284195 oci.go:107] Successfully prepared a docker volume kindnet-624324
	I1019 17:17:08.752939  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:08.752966  284195 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:17:08.753044  284195 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:17:08.468710  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:10.967920  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	I1019 17:17:13.285288  284195 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.53219807s)
	I1019 17:17:13.285328  284195 kic.go:203] duration metric: took 4.532358026s to extract preloaded images to volume ...
	W1019 17:17:13.285446  284195 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:17:13.285503  284195 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:17:13.285581  284195 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:17:13.352912  284195 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-624324 --name kindnet-624324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-624324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-624324 --network kindnet-624324 --ip 192.168.94.2 --volume kindnet-624324:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:17:13.673210  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Running}}
	I1019 17:17:13.694043  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:13.715221  284195 cli_runner.go:164] Run: docker exec kindnet-624324 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:17:13.769251  284195 oci.go:144] the created container "kindnet-624324" has a running status.
	I1019 17:17:13.769295  284195 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa...
	I1019 17:17:14.395511  284195 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:17:14.428761  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:14.452564  284195 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:17:14.452592  284195 kic_runner.go:114] Args: [docker exec --privileged kindnet-624324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:17:14.506838  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:14.528132  284195 machine.go:94] provisionDockerMachine start ...
	I1019 17:17:14.528220  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.554803  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.555131  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.555150  284195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:17:14.700828  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-624324
	
	I1019 17:17:14.700860  284195 ubuntu.go:182] provisioning hostname "kindnet-624324"
	I1019 17:17:14.700930  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.722903  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.723210  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.723247  284195 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-624324 && echo "kindnet-624324" | sudo tee /etc/hostname
	I1019 17:17:14.871535  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-624324
	
	I1019 17:17:14.871620  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.894483  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.894729  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.894785  284195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-624324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-624324/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-624324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:17:15.034728  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:17:15.034765  284195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:17:15.034806  284195 ubuntu.go:190] setting up certificates
	I1019 17:17:15.034822  284195 provision.go:84] configureAuth start
	I1019 17:17:15.034881  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:15.053543  284195 provision.go:143] copyHostCerts
	I1019 17:17:15.053608  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:17:15.053619  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:17:15.053704  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:17:15.053838  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:17:15.053855  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:17:15.053906  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:17:15.053998  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:17:15.054008  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:17:15.054046  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:17:15.054139  284195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.kindnet-624324 san=[127.0.0.1 192.168.94.2 kindnet-624324 localhost minikube]
	I1019 17:17:15.700540  284195 provision.go:177] copyRemoteCerts
	I1019 17:17:15.700610  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:17:15.700657  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:15.719415  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:15.815615  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:17:15.835374  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1019 17:17:15.853394  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:17:15.870817  284195 provision.go:87] duration metric: took 835.979766ms to configureAuth
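Note: the server cert generated above is signed by minikube's own CA and carries that SAN list. As a rough stand-in only (self-signed, skipping the CA signing step minikube performs), a certificate with a comparable SAN set can be produced with OpenSSL 1.1.1+; names and IPs below mirror the log but are illustrative:

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.kindnet-624324" \
	  -addext "subjectAltName=DNS:kindnet-624324,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.94.2"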
	I1019 17:17:15.870844  284195 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:17:15.870987  284195 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:15.871098  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:15.889170  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:15.889386  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:15.889409  284195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:17:16.132058  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:17:16.132101  284195 machine.go:97] duration metric: took 1.6039452s to provisionDockerMachine
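Note: the kicbase image's crio.service is believed to source /etc/sysconfig/crio.minikube via an EnvironmentFile= line, which is how the --insecure-registry flag written above reaches the daemon after the restart. A way to confirm (sketch; verify against the actual unit):

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i -A1 EnvironmentFile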
	I1019 17:17:16.132114  284195 client.go:174] duration metric: took 7.998864881s to LocalClient.Create
	I1019 17:17:16.132140  284195 start.go:167] duration metric: took 7.998941099s to libmachine.API.Create "kindnet-624324"
	I1019 17:17:16.132152  284195 start.go:293] postStartSetup for "kindnet-624324" (driver="docker")
	I1019 17:17:16.132164  284195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:17:16.132222  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:17:16.132276  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.153529  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.260735  284195 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:17:16.265310  284195 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:17:16.265345  284195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:17:16.265359  284195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:17:16.265411  284195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:17:16.265500  284195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:17:16.265628  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:17:16.275522  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:17:16.299434  284195 start.go:296] duration metric: took 167.267905ms for postStartSetup
	I1019 17:17:16.299836  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:16.321458  284195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json ...
	I1019 17:17:16.321754  284195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:17:16.321798  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.343896  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.445114  284195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:17:16.450592  284195 start.go:128] duration metric: took 8.32032478s to createHost
	I1019 17:17:16.450620  284195 start.go:83] releasing machines lock for "kindnet-624324", held for 8.320477229s
	I1019 17:17:16.450711  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:16.473032  284195 ssh_runner.go:195] Run: cat /version.json
	I1019 17:17:16.473103  284195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:17:16.473129  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.473171  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.495216  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.495240  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.592529  284195 ssh_runner.go:195] Run: systemctl --version
	I1019 17:17:16.663777  284195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:17:16.707738  284195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:17:16.713595  284195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:17:16.713682  284195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:17:16.740983  284195 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
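Note: minikube sidelines conflicting bridge/podman CNI configs by appending a .mk_disabled suffix rather than deleting them, so the rename is reversible. An illustrative way to inspect or undo it (file name taken from the log line above):

	ls /etc/cni/net.d/*.mk_disabled
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	        /etc/cni/net.d/87-podman-bridge.conflist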
	I1019 17:17:16.741004  284195 start.go:496] detecting cgroup driver to use...
	I1019 17:17:16.741032  284195 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:17:16.741081  284195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:17:16.757791  284195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:17:16.772284  284195 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:17:16.772348  284195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:17:16.791264  284195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:17:16.809534  284195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:17:16.890052  284195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:17:16.983470  284195 docker.go:234] disabling docker service ...
	I1019 17:17:16.983529  284195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:17:17.002324  284195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:17:17.016190  284195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:17:17.114861  284195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:17:17.206441  284195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:17:17.222730  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:17:17.240434  284195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:17:17.240507  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.252391  284195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:17:17.252471  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.262248  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.272613  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.281637  284195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:17:17.290764  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.299872  284195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.313746  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.322780  284195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:17:17.330348  284195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:17:17.337978  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:17.420377  284195 ssh_runner.go:195] Run: sudo systemctl restart crio
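Note: condensed, the /etc/crio/crio.conf.d/02-crio.conf rewrite performed above amounts to the following (commands copied from the log; the default_sysctls steps are omitted for brevity):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio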
	I1019 17:17:17.552141  284195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:17:17.552216  284195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:17:17.556491  284195 start.go:564] Will wait 60s for crictl version
	I1019 17:17:17.556540  284195 ssh_runner.go:195] Run: which crictl
	I1019 17:17:17.560782  284195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:17:17.588875  284195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:17:17.588962  284195 ssh_runner.go:195] Run: crio --version
	I1019 17:17:17.622448  284195 ssh_runner.go:195] Run: crio --version
	I1019 17:17:17.658763  284195 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:17:17.660284  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:17.679634  284195 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 17:17:17.684471  284195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
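Note: the rewrite above is minikube's idempotent hosts-pinning pattern: strip any line tab-anchored to the name, then append the fresh mapping. Generalized as a sketch (pin_host is an illustrative helper name, not minikube code; the same pattern reappears below for control-plane.minikube.internal):

	pin_host() {
	  ip=$1; name=$2   # name is matched as a regex, as in the log
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	}
	pin_host 192.168.94.1 host.minikube.internal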
	I1019 17:17:17.701944  284195 kubeadm.go:884] updating cluster {Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:17:17.702274  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:17.702348  284195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:17:17.743014  284195 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:17:17.743037  284195 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:17:17.743118  284195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:17:17.768109  284195 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:17:17.768133  284195 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:17:17.768142  284195 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 17:17:17.768247  284195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-624324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
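Note: in the kubelet unit text above, the empty ExecStart= is deliberate: in a systemd drop-in it clears the ExecStart inherited from kubelet.service before the replacement command is installed. A minimal sketch of the same override (flags trimmed from the log's full command line; path matches the scp below):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF' >/dev/null
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload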
	I1019 17:17:17.768342  284195 ssh_runner.go:195] Run: crio config
	I1019 17:17:17.816135  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:17.816177  284195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:17:17.816205  284195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-624324 NodeName:kindnet-624324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:17:17.816392  284195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-624324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
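Note: the stacked InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents above are what minikube writes to /var/tmp/minikube/kubeadm.yaml (see the scp a few lines below). A config like this can be exercised without touching the node:

	# --dry-run renders manifests to a temp directory instead of /etc/kubernetes
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run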
	
	I1019 17:17:17.816479  284195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:17:17.827056  284195 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:17:17.827151  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:17:17.837743  284195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1019 17:17:17.854668  284195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:17:17.871385  284195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 17:17:17.884648  284195 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:17:17.888754  284195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:17:17.898862  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1019 17:17:13.468212  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:15.967294  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	I1019 17:17:18.431867  279986 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:17:18.431949  279986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:17:18.432118  279986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:17:18.432226  279986 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:17:18.432287  279986 kubeadm.go:319] OS: Linux
	I1019 17:17:18.432358  279986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:17:18.432425  279986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:17:18.432491  279986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:17:18.432561  279986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:17:18.432621  279986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:17:18.432674  279986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:17:18.432738  279986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:17:18.432784  279986 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:17:18.432863  279986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:17:18.432997  279986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:17:18.433168  279986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:17:18.433267  279986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:17:18.434877  279986 out.go:252]   - Generating certificates and keys ...
	I1019 17:17:18.434965  279986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:17:18.435057  279986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:17:18.435197  279986 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:17:18.435256  279986 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:17:18.435329  279986 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:17:18.435399  279986 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:17:18.435467  279986 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:17:18.435619  279986 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-624324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:17:18.435724  279986 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:17:18.435932  279986 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-624324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:17:18.436028  279986 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:17:18.436160  279986 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:17:18.436215  279986 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:17:18.436284  279986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:17:18.436360  279986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:17:18.436459  279986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:17:18.436546  279986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:17:18.436654  279986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:17:18.436732  279986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:17:18.436861  279986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:17:18.436969  279986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:17:18.438585  279986 out.go:252]   - Booting up control plane ...
	I1019 17:17:18.438712  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:17:18.438813  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:17:18.438905  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:17:18.439043  279986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:17:18.439217  279986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:17:18.439350  279986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:17:18.439481  279986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:17:18.439531  279986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:17:18.439647  279986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:17:18.439774  279986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:17:18.439839  279986 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00164339s
	I1019 17:17:18.439961  279986 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:17:18.440093  279986 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 17:17:18.440241  279986 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:17:18.440344  279986 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:17:18.440422  279986 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.765800179s
	I1019 17:17:18.440499  279986 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695365008s
	I1019 17:17:18.440559  279986 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501482623s
	I1019 17:17:18.440659  279986 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:17:18.440776  279986 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:17:18.440834  279986 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:17:18.441109  279986 kubeadm.go:319] [mark-control-plane] Marking the node auto-624324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:17:18.441168  279986 kubeadm.go:319] [bootstrap-token] Using token: jhs3kv.w1yqlyxcfw05u8f4
	I1019 17:17:18.443121  279986 out.go:252]   - Configuring RBAC rules ...
	I1019 17:17:18.443216  279986 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:17:18.443345  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:17:18.443543  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:17:18.443732  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:17:18.443892  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:17:18.444050  279986 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:17:18.444213  279986 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:17:18.444277  279986 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:17:18.444349  279986 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:17:18.444359  279986 kubeadm.go:319] 
	I1019 17:17:18.444455  279986 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:17:18.444474  279986 kubeadm.go:319] 
	I1019 17:17:18.444566  279986 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:17:18.444578  279986 kubeadm.go:319] 
	I1019 17:17:18.444619  279986 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:17:18.444710  279986 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:17:18.444781  279986 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:17:18.444791  279986 kubeadm.go:319] 
	I1019 17:17:18.444878  279986 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:17:18.444890  279986 kubeadm.go:319] 
	I1019 17:17:18.444931  279986 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:17:18.444940  279986 kubeadm.go:319] 
	I1019 17:17:18.444994  279986 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:17:18.445115  279986 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:17:18.445219  279986 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:17:18.445232  279986 kubeadm.go:319] 
	I1019 17:17:18.445330  279986 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:17:18.445444  279986 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:17:18.445453  279986 kubeadm.go:319] 
	I1019 17:17:18.445558  279986 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jhs3kv.w1yqlyxcfw05u8f4 \
	I1019 17:17:18.445669  279986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:17:18.445703  279986 kubeadm.go:319] 	--control-plane 
	I1019 17:17:18.445716  279986 kubeadm.go:319] 
	I1019 17:17:18.445822  279986 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:17:18.445830  279986 kubeadm.go:319] 
	I1019 17:17:18.445935  279986 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jhs3kv.w1yqlyxcfw05u8f4 \
	I1019 17:17:18.446095  279986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 17:17:18.446117  279986 cni.go:84] Creating CNI manager for ""
	I1019 17:17:18.446129  279986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:17:18.448323  279986 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:17:18.449497  279986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:17:18.454124  279986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:17:18.454142  279986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:17:18.469699  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:17:18.695481  279986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:17:18.695539  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:18.695590  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-624324 minikube.k8s.io/updated_at=2025_10_19T17_17_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=auto-624324 minikube.k8s.io/primary=true
	I1019 17:17:18.779735  279986 ops.go:34] apiserver oom_adj: -16
	I1019 17:17:18.779864  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:19.280273  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:19.780304  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:20.280725  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:20.779948  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:21.280292  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:21.780267  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:22.280245  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
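Note: the half-second "kubectl get sa default" polling above is minikube waiting for the cluster's default ServiceAccount, a common readiness signal that the controller-manager has begun populating the namespace. The equivalent loop by hand (sketch):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done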
	I1019 17:17:17.985900  284195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:17:18.009520  284195 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324 for IP: 192.168.94.2
	I1019 17:17:18.009555  284195 certs.go:195] generating shared ca certs ...
	I1019 17:17:18.009574  284195 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.009753  284195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:17:18.009795  284195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:17:18.009807  284195 certs.go:257] generating profile certs ...
	I1019 17:17:18.009886  284195 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key
	I1019 17:17:18.009909  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt with IP's: []
	I1019 17:17:18.047243  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt ...
	I1019 17:17:18.047288  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt: {Name:mk4f83d3317146f3a69a91e9c2e25c772b5846a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.047502  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key ...
	I1019 17:17:18.047517  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key: {Name:mk0bf4a6486efc5c7bdd97d8013fa135dde0f437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.047626  284195 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa
	I1019 17:17:18.047648  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 17:17:18.543932  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa ...
	I1019 17:17:18.543960  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa: {Name:mkd2e3c36a12da51079c4c18ca9f93419cb75824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.544174  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa ...
	I1019 17:17:18.544192  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa: {Name:mk2f7dd6807ac203a87fa35802ac891d2c701900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.544298  284195 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt
	I1019 17:17:18.544422  284195 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key
	I1019 17:17:18.544521  284195 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key
	I1019 17:17:18.544541  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt with IP's: []
	I1019 17:17:18.596758  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt ...
	I1019 17:17:18.596786  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt: {Name:mkba0841fda4775d4d5e70d6c80cb8080b9ba0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.596992  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key ...
	I1019 17:17:18.597007  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key: {Name:mk36459cd4bff07ed5dece50bbeffb7dc8fb0574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.597247  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:17:18.597285  284195 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:17:18.597296  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:17:18.597325  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:17:18.597356  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:17:18.597386  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:17:18.597424  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:17:18.598215  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:17:18.617934  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:17:18.639287  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:17:18.658813  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:17:18.680699  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:17:18.703555  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:17:18.725613  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:17:18.752358  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:17:18.776748  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:17:18.801138  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:17:18.821145  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:17:18.841542  284195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:17:18.855339  284195 ssh_runner.go:195] Run: openssl version
	I1019 17:17:18.861675  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:17:18.870707  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.874706  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.874774  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.913860  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:17:18.923379  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:17:18.932605  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.937171  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.937226  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.974641  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:17:18.983927  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:17:18.993000  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:17:18.997287  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:17:18.997347  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:17:19.032130  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
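Note: the <hash>.0 links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: tools that scan /etc/ssl/certs look certificates up by subject-hash file name. Installing a CA the same way (sketch, paths from the log):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem)
	sudo ln -fs /etc/ssl/certs/72282.pem "/etc/ssl/certs/${h}.0"   # h is 3ec20f2e here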
	I1019 17:17:19.041512  284195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:17:19.045776  284195 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:17:19.045832  284195 kubeadm.go:401] StartCluster: {Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:19.045917  284195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:17:19.045989  284195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:17:19.074264  284195 cri.go:89] found id: ""
	I1019 17:17:19.074338  284195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:17:19.082812  284195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:17:19.091229  284195 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:17:19.091288  284195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:17:19.099227  284195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:17:19.099246  284195 kubeadm.go:158] found existing configuration files:
	
	I1019 17:17:19.099295  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:17:19.107106  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:17:19.107156  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:17:19.115165  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:17:19.123514  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:17:19.123572  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:17:19.131204  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:17:19.139302  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:17:19.139357  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:17:19.147017  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:17:19.154633  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:17:19.154693  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:17:19.162194  284195 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:17:19.223749  284195 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:17:19.288213  284195 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
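Note: the long --ignore-preflight-errors list, and the SystemVerification warning above, reflect running a node inside a container, where kernel-module and resource checks do not apply (see the "ignoring SystemVerification for kubeadm because of docker driver" line earlier). The same checks can be replayed in isolation, e.g.:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem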
	
	
	==> CRI-O <==
	Oct 19 17:16:46 embed-certs-090139 crio[564]: time="2025-10-19T17:16:46.269049443Z" level=info msg="Started container" PID=1742 containerID=9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper id=5b62a28f-1b29-49cc-b887-38688f69cc5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c3557327c37e444b32708db3fbfd9f8039c4dc4823ffcaca33c0f5a032a40f0
	Oct 19 17:16:47 embed-certs-090139 crio[564]: time="2025-10-19T17:16:47.230554127Z" level=info msg="Removing container: b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954" id=a5a5eedd-f9f9-4f0e-8469-e8ae7ebc6040 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:16:47 embed-certs-090139 crio[564]: time="2025-10-19T17:16:47.247466658Z" level=info msg="Removed container b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=a5a5eedd-f9f9-4f0e-8469-e8ae7ebc6040 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.292026395Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4de068c8-11ad-4347-8d63-ec85c1efc851 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.293029696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e376e10-4c45-4c56-a7bd-a07c0ef333bd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.294166635Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3841b89b-806b-46e0-acd1-f7f64bc158f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.294447095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299427147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299627339Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1013ccf33efef85063a43b13879c29f417d9c16b61927574b4b50a2cd13c1122/merged/etc/passwd: no such file or directory"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299667316Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1013ccf33efef85063a43b13879c29f417d9c16b61927574b4b50a2cd13c1122/merged/etc/group: no such file or directory"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.3000104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.328803028Z" level=info msg="Created container db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae: kube-system/storage-provisioner/storage-provisioner" id=3841b89b-806b-46e0-acd1-f7f64bc158f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.32953414Z" level=info msg="Starting container: db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae" id=16a2cc02-2ff8-4718-8932-83053dbd6d95 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.332214241Z" level=info msg="Started container" PID=1756 containerID=db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae description=kube-system/storage-provisioner/storage-provisioner id=16a2cc02-2ff8-4718-8932-83053dbd6d95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b14a17ab588b166a6522b565dfccd8b0c1aff548224a7a12a4f47f1a10327325
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.138039045Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=62e7ecb2-bd97-411a-a76f-fb59e75d7729 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.139204802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39ef34c1-e5d7-4314-be3a-29c5e406cd2f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.140407735Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f1781bd7-bd32-489c-b5c5-cc242e8d5e21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.140690914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.146802727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.147529561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.177053656Z" level=info msg="Created container c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f1781bd7-bd32-489c-b5c5-cc242e8d5e21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.177746538Z" level=info msg="Starting container: c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5" id=40dbbb07-4555-4fed-af6d-7711ef31f5a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.179861329Z" level=info msg="Started container" PID=1770 containerID=c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper id=40dbbb07-4555-4fed-af6d-7711ef31f5a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c3557327c37e444b32708db3fbfd9f8039c4dc4823ffcaca33c0f5a032a40f0
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.30581667Z" level=info msg="Removing container: 9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef" id=f7c94bc0-e118-41a6-8138-a4a9c966cd35 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.317230091Z" level=info msg="Removed container 9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f7c94bc0-e118-41a6-8138-a4a9c966cd35 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c41b3f083e0df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   3c3557327c37e       dashboard-metrics-scraper-6ffb444bf9-pg7gk   kubernetes-dashboard
	db2380c01b5a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   b14a17ab588b1       storage-provisioner                          kube-system
	39be18dfee0d9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   c70e8512092be       kubernetes-dashboard-855c9754f9-9d29n        kubernetes-dashboard
	8120323c27ea3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   0096a6550d887       busybox                                      default
	f28bfcad6c405       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   d9b65e18ffea1       coredns-66bc5c9577-zw7d8                     kube-system
	032a52a687225       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   463821b71204f       kube-proxy-8f4lh                             kube-system
	2019570c30b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   b14a17ab588b1       storage-provisioner                          kube-system
	0a03ae2cd978a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   40707355c2255       kindnet-dwsh7                                kube-system
	3c6fd3249cca2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   103b1f9af531a       etcd-embed-certs-090139                      kube-system
	8c97264fa8b22       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   b6f7c9e1a1deb       kube-apiserver-embed-certs-090139            kube-system
	7269af7f81934       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   021a7bc6fdc16       kube-scheduler-embed-certs-090139            kube-system
	d957ab9f5db99       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   43c8d7de49ce8       kube-controller-manager-embed-certs-090139   kube-system
	
	
	==> coredns [f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38332 - 43849 "HINFO IN 5198447218566340739.7056095054614731733. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072693336s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-090139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-090139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-090139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-090139
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:17:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-090139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                308b3de9-570c-4288-a8e0-c3790dfe5ce4
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-zw7d8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-090139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-dwsh7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-090139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-090139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-8f4lh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-090139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pg7gk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9d29n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-090139 event: Registered Node embed-certs-090139 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-090139 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-090139 event: Registered Node embed-certs-090139 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda] <==
	{"level":"warn","ts":"2025-10-19T17:16:32.811785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.818857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.825457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.831846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.841502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.847969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.854840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.862220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.869170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.876527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.886264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.892095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.899340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.905898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.912469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.918732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.926511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.932881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.940148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.955873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.967250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:33.019042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48188","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:43.366277Z","caller":"traceutil/trace.go:172","msg":"trace[279885890] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"103.025414ms","start":"2025-10-19T17:16:43.263231Z","end":"2025-10-19T17:16:43.366256Z","steps":["trace[279885890] 'process raft request'  (duration: 78.883359ms)","trace[279885890] 'compare'  (duration: 23.909272ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:16:43.374840Z","caller":"traceutil/trace.go:172","msg":"trace[50804672] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"108.667413ms","start":"2025-10-19T17:16:43.266151Z","end":"2025-10-19T17:16:43.374818Z","steps":["trace[50804672] 'process raft request'  (duration: 108.599902ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:43.374875Z","caller":"traceutil/trace.go:172","msg":"trace[1630067863] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"109.091739ms","start":"2025-10-19T17:16:43.265760Z","end":"2025-10-19T17:16:43.374851Z","steps":["trace[1630067863] 'process raft request'  (duration: 108.884222ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:24 up 59 min,  0 user,  load average: 4.34, 3.33, 2.03
	Linux embed-certs-090139 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052] <==
	I1019 17:16:34.661339       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:34.661577       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 17:16:34.661746       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:34.661766       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:34.661791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:34.900476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:34.900537       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:34.900554       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:34.998565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:35.397784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:35.397822       1 metrics.go:72] Registering metrics
	I1019 17:16:35.397892       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:44.900288       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:16:44.900351       1 main.go:301] handling current node
	I1019 17:16:54.903526       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:16:54.903572       1 main.go:301] handling current node
	I1019 17:17:04.900444       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:17:04.900514       1 main.go:301] handling current node
	I1019 17:17:14.905162       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:17:14.905205       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85] <==
	I1019 17:16:33.523971       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:33.523981       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:33.523987       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:33.523995       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:33.523521       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:16:33.523184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:16:33.523512       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:16:33.523532       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:16:33.523648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:33.533762       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:33.535714       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:33.541859       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:33.543149       1 policy_source.go:240] refreshing policies
	I1019 17:16:33.568232       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:33.841759       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:33.872646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:33.896680       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:33.903967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:33.911430       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:33.950991       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.235.194"}
	I1019 17:16:33.961891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.14.128"}
	I1019 17:16:34.426401       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:37.224433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:37.422601       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:37.470365       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0] <==
	I1019 17:16:36.840157       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:36.840163       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:36.841214       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:36.843465       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:36.867010       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:16:36.867034       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:36.867058       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:16:36.867102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:16:36.867215       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:36.867002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:16:36.867230       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:36.867239       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:16:36.867076       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:16:36.867457       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:16:36.867547       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:16:36.868464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:16:36.868504       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:36.868510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:16:36.868518       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:16:36.868522       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:16:36.872118       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:36.872916       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:16:36.876421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:36.893131       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:36.898375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888] <==
	I1019 17:16:34.527893       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:34.586140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:34.686942       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:34.686979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1019 17:16:34.687083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:34.705183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:34.705230       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:34.710084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:34.710442       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:34.710471       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:34.712636       1 config.go:200] "Starting service config controller"
	I1019 17:16:34.712722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:34.712770       1 config.go:309] "Starting node config controller"
	I1019 17:16:34.712785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:34.712801       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:34.712869       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:34.712881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:34.712908       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:34.712918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:34.813764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:34.813788       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:34.813844       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974] <==
	I1019 17:16:31.859412       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:16:33.533170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:33.533205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:33.539845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:16:33.539894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.539910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:16:33.539921       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.540379       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:33.540409       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:33.541398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:33.541493       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:33.641110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.641255       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:16:33.641611       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: E1019 17:16:37.403847     721 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-090139\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'embed-certs-090139' and this object" logger="UnhandledError" reflector="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485131     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8667f19d-4c29-4376-8168-ba8ac48bde56-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9d29n\" (UID: \"8667f19d-4c29-4376-8168-ba8ac48bde56\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485194     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btf6\" (UniqueName: \"kubernetes.io/projected/8667f19d-4c29-4376-8168-ba8ac48bde56-kube-api-access-8btf6\") pod \"kubernetes-dashboard-855c9754f9-9d29n\" (UID: \"8667f19d-4c29-4376-8168-ba8ac48bde56\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485293     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pg7gk\" (UID: \"bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485346     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxs2\" (UniqueName: \"kubernetes.io/projected/bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5-kube-api-access-vmxs2\") pod \"dashboard-metrics-scraper-6ffb444bf9-pg7gk\" (UID: \"bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk"
	Oct 19 17:16:45 embed-certs-090139 kubelet[721]: I1019 17:16:45.668760     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n" podStartSLOduration=4.395073606 podStartE2EDuration="8.668734463s" podCreationTimestamp="2025-10-19 17:16:37 +0000 UTC" firstStartedPulling="2025-10-19 17:16:38.331489758 +0000 UTC m=+7.303151252" lastFinishedPulling="2025-10-19 17:16:42.605150618 +0000 UTC m=+11.576812109" observedRunningTime="2025-10-19 17:16:43.261373569 +0000 UTC m=+12.233035075" watchObservedRunningTime="2025-10-19 17:16:45.668734463 +0000 UTC m=+14.640395967"
	Oct 19 17:16:46 embed-certs-090139 kubelet[721]: I1019 17:16:46.222044     721 scope.go:117] "RemoveContainer" containerID="b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: I1019 17:16:47.228211     721 scope.go:117] "RemoveContainer" containerID="b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: I1019 17:16:47.228613     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: E1019 17:16:47.228838     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:16:48 embed-certs-090139 kubelet[721]: I1019 17:16:48.238212     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:48 embed-certs-090139 kubelet[721]: E1019 17:16:48.238460     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:16:54 embed-certs-090139 kubelet[721]: I1019 17:16:54.897244     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:54 embed-certs-090139 kubelet[721]: E1019 17:16:54.897507     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:05 embed-certs-090139 kubelet[721]: I1019 17:17:05.291502     721 scope.go:117] "RemoveContainer" containerID="2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.137392     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.304429     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.304670     721 scope.go:117] "RemoveContainer" containerID="c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: E1019 17:17:08.304886     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:14 embed-certs-090139 kubelet[721]: I1019 17:17:14.897015     721 scope.go:117] "RemoveContainer" containerID="c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	Oct 19 17:17:14 embed-certs-090139 kubelet[721]: E1019 17:17:14.897251     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: kubelet.service: Consumed 1.705s CPU time.
	
	
	==> kubernetes-dashboard [39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670] <==
	2025/10/19 17:16:42 Starting overwatch
	2025/10/19 17:16:42 Using namespace: kubernetes-dashboard
	2025/10/19 17:16:42 Using in-cluster config to connect to apiserver
	2025/10/19 17:16:42 Using secret token for csrf signing
	2025/10/19 17:16:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:16:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:16:42 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:16:42 Generating JWE encryption key
	2025/10/19 17:16:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:16:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:16:42 Initializing JWE encryption key from synchronized object
	2025/10/19 17:16:42 Creating in-cluster Sidecar client
	2025/10/19 17:16:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:42 Serving insecurely on HTTP port: 9090
	2025/10/19 17:17:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722] <==
	I1019 17:16:34.488223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:17:04.490483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae] <==
	I1019 17:17:05.351830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:17:05.363016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:17:05.363047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:17:05.366281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:08.822244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:13.083038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:16.681723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:19.735450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.757961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.763754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:22.763919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:17:22.764029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abf9a435-53d4-45a2-bf52-58f629c09914", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c became leader
	I1019 17:17:22.764211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c!
	W1019 17:17:22.766630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.778338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:22.864819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c!
	

-- /stdout --
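Reading the log dump above: CoreDNS and the first storage-provisioner container both time out dialing the in-cluster apiserver VIP (Get "https://10.96.0.1:443/...": i/o timeout), dashboard-metrics-scraper sits in CrashLoopBackOff (back-off 10s, then 20s), and systemd stops kubelet.service at 17:17:20, consistent with the Pause step under test. A minimal sketch for re-collecting the same post-mortem data by hand, using the same commands the harness runs in this report; the profile name embed-certs-090139 is taken from the logs above and is an assumption for any other cluster:

	# Sketch, not the harness itself: re-run the post-mortem collection manually.
	out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139
	kubectl --context embed-certs-090139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running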
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139: exit status 2 (338.436894ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-090139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
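The second post-mortem block below dumps the full docker inspect for the KIC node container. If only the pause-relevant state is needed, the same fields can be pulled with inspect format templates; this is a sketch, with field names matching the JSON below (State.Status, State.Paused, NetworkSettings.Ports) and the container name taken from this report:

	# Sketch: extract just the state and port bindings from the inspect output below.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-090139
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-090139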
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-090139
helpers_test.go:243: (dbg) docker inspect embed-certs-090139:

-- stdout --
	[
	    {
	        "Id": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	        "Created": "2025-10-19T17:15:20.164222926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269072,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:24.786634875Z",
	            "FinishedAt": "2025-10-19T17:16:23.947799143Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3/491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3-json.log",
	        "Name": "/embed-certs-090139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-090139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-090139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "491b138dfd3b0f8450a43a099d1e6d3f34448c3605ec02b7f98ffd5aefd0c3d3",
	                "LowerDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adea2bc670d3c2f94262acc648cd1d97c1ba620ee9d7f9af5505590dd624f110/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-090139",
	                "Source": "/var/lib/docker/volumes/embed-certs-090139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-090139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-090139",
	                "name.minikube.sigs.k8s.io": "embed-certs-090139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9a51b9c047845ad7b5390c71c1f39794dc5f446cf895f8b70aa9ff12768bad1",
	            "SandboxKey": "/var/run/docker/netns/f9a51b9c0478",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-090139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:3e:6e:ef:ec:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f3b41047906a4786b547f272192944794206cd82d35412a1c4498289619b68a",
	                    "EndpointID": "23d8a85c3c449fc368daa01d1bf0d44d13c92115cac370611ee2fe236baca9b5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-090139",
	                        "491b138dfd3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
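The inspect dump above is the raw JSON for the node container; individual fields are normally read from it with a Go template rather than by eye. A minimal sketch, assuming the embed-certs-090139 container from the dump (this is the same template the provisioning log below uses to resolve the SSH port):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-090139
	# prints 33089 for this run, matching the NetworkSettings.Ports block above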
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139: exit status 2 (336.158599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
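The non-zero status here reflects paused or stopped components even though the host container reports Running. The same --format flag can read the other status fields; a sketch (the field names .Kubelet and .APIServer are assumptions based on minikube's usual status output, not taken from this run):

	out/minikube-linux-amd64 status --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}' -p embed-certs-090139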
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-090139 logs -n 25: (1.335922186s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-806996                                                                                                                                                                                                                          │ no-preload-806996            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-090139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p embed-certs-090139 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-663015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-663015 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-624324               │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ image   │ embed-certs-090139 image list --format=json                                                                                                                                                                                                   │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p embed-certs-090139 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:17:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:17:07.911145  284195 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:07.911485  284195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:07.911498  284195 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:07.911504  284195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:07.911744  284195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:07.912432  284195 out.go:368] Setting JSON to false
	I1019 17:17:07.913922  284195 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3574,"bootTime":1760890654,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:17:07.914040  284195 start.go:143] virtualization: kvm guest
	I1019 17:17:07.916449  284195 out.go:179] * [kindnet-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:17:07.918177  284195 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:17:07.918173  284195 notify.go:221] Checking for updates...
	I1019 17:17:07.919770  284195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:17:07.921268  284195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:07.922623  284195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:17:07.924398  284195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:17:07.926316  284195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:17:07.928030  284195 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928161  284195 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928272  284195 config.go:182] Loaded profile config "embed-certs-090139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:07.928385  284195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:17:07.955289  284195 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:17:07.955374  284195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:08.021597  284195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:17:08.010265899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:08.021703  284195 docker.go:319] overlay module found
	I1019 17:17:08.023523  284195 out.go:179] * Using the docker driver based on user configuration
	I1019 17:17:08.024696  284195 start.go:309] selected driver: docker
	I1019 17:17:08.024713  284195 start.go:930] validating driver "docker" against <nil>
	I1019 17:17:08.024724  284195 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:17:08.025299  284195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:08.095598  284195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 17:17:08.082271965 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:08.095740  284195 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:17:08.095956  284195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:08.097623  284195 out.go:179] * Using Docker driver with root privileges
	I1019 17:17:08.098923  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:08.098952  284195 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:17:08.099199  284195 start.go:353] cluster config:
	{Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:08.101273  284195 out.go:179] * Starting "kindnet-624324" primary control-plane node in "kindnet-624324" cluster
	I1019 17:17:08.102517  284195 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:17:08.103771  284195 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:17:08.105096  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:08.105143  284195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:17:08.105147  284195 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:17:08.105160  284195 cache.go:59] Caching tarball of preloaded images
	I1019 17:17:08.105264  284195 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:17:08.105288  284195 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:17:08.105403  284195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json ...
	I1019 17:17:08.105428  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json: {Name:mk605196bae1a2f9aab06ad07829a616de1a599f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:08.129932  284195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:17:08.129955  284195 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:17:08.129973  284195 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:17:08.130004  284195 start.go:360] acquireMachinesLock for kindnet-624324: {Name:mk2a20d18414afeb65441c6d6d63ed8b022dba64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:08.130127  284195 start.go:364] duration metric: took 99.58µs to acquireMachinesLock for "kindnet-624324"
	I1019 17:17:08.130157  284195 start.go:93] Provisioning new machine with config: &{Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:08.130245  284195 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:17:03.967488  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:05.968220  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:05.092522  268862 pod_ready.go:104] pod "coredns-66bc5c9577-zw7d8" is not "Ready", error: <nil>
	I1019 17:17:07.092751  268862 pod_ready.go:94] pod "coredns-66bc5c9577-zw7d8" is "Ready"
	I1019 17:17:07.092783  268862 pod_ready.go:86] duration metric: took 32.005446848s for pod "coredns-66bc5c9577-zw7d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.095535  268862 pod_ready.go:83] waiting for pod "etcd-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.099757  268862 pod_ready.go:94] pod "etcd-embed-certs-090139" is "Ready"
	I1019 17:17:07.099784  268862 pod_ready.go:86] duration metric: took 4.218129ms for pod "etcd-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.101852  268862 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.105603  268862 pod_ready.go:94] pod "kube-apiserver-embed-certs-090139" is "Ready"
	I1019 17:17:07.105627  268862 pod_ready.go:86] duration metric: took 3.749516ms for pod "kube-apiserver-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.107509  268862 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.291717  268862 pod_ready.go:94] pod "kube-controller-manager-embed-certs-090139" is "Ready"
	I1019 17:17:07.291746  268862 pod_ready.go:86] duration metric: took 184.216962ms for pod "kube-controller-manager-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.490889  268862 pod_ready.go:83] waiting for pod "kube-proxy-8f4lh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:07.891341  268862 pod_ready.go:94] pod "kube-proxy-8f4lh" is "Ready"
	I1019 17:17:07.891372  268862 pod_ready.go:86] duration metric: took 400.457732ms for pod "kube-proxy-8f4lh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.091752  268862 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.491497  268862 pod_ready.go:94] pod "kube-scheduler-embed-certs-090139" is "Ready"
	I1019 17:17:08.491528  268862 pod_ready.go:86] duration metric: took 399.738799ms for pod "kube-scheduler-embed-certs-090139" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:08.491543  268862 pod_ready.go:40] duration metric: took 33.407732288s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:08.547078  268862 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:17:08.549485  268862 out.go:179] * Done! kubectl is now configured to use "embed-certs-090139" cluster and "default" namespace by default
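	# The readiness waits above can be reproduced by hand once kubectl is
	# configured; a sketch, assuming the context name matches the profile:
	kubectl --context embed-certs-090139 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context embed-certs-090139 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=60s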
	I1019 17:17:07.713638  279986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:17:07.728095  279986 ssh_runner.go:195] Run: openssl version
	I1019 17:17:07.735460  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:17:07.744817  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.749085  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.749148  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:17:07.785565  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:17:07.794701  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:17:07.803505  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.807525  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.807613  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:07.846607  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:17:07.855942  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:17:07.864812  279986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.869099  279986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.869159  279986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:17:07.909215  279986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
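	# The <hash>.0 link names above follow OpenSSL's subject-hash convention:
	# at verify time the cert is located via its -hash value plus a ".0"
	# suffix. A sketch of the equivalent manual steps for the last cert above:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem)  # 51391683 in this run
	sudo ln -fs /usr/share/ca-certificates/7228.pem "/etc/ssl/certs/${h}.0"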
	I1019 17:17:07.919715  279986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:17:07.923900  279986 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:17:07.923974  279986 kubeadm.go:401] StartCluster: {Name:auto-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:07.924074  279986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:17:07.924137  279986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:17:07.954304  279986 cri.go:89] found id: ""
	I1019 17:17:07.954371  279986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:17:07.963082  279986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:17:07.973291  279986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:17:07.973354  279986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:17:07.984866  279986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:17:07.984885  279986 kubeadm.go:158] found existing configuration files:
	
	I1019 17:17:07.984943  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:17:07.995186  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:17:07.995249  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:17:08.004930  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:17:08.014144  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:17:08.014214  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:17:08.022823  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:17:08.031130  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:17:08.031192  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:17:08.038841  279986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:17:08.049250  279986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:17:08.049306  279986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
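	# Each kubeconfig above is probed for the expected control-plane URL and
	# removed when the grep fails; a compact sketch equivalent to the four
	# grep/rm cycles above (hypothetical loop, not taken from the log):
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done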
	I1019 17:17:08.058309  279986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:17:08.127897  279986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:17:08.204987  279986 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:17:08.132894  284195 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:17:08.133202  284195 start.go:159] libmachine.API.Create for "kindnet-624324" (driver="docker")
	I1019 17:17:08.133239  284195 client.go:171] LocalClient.Create starting
	I1019 17:17:08.133308  284195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:17:08.133345  284195 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:08.133366  284195 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:08.133448  284195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:17:08.133485  284195 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:08.133501  284195 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:08.133954  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:17:08.155830  284195 cli_runner.go:211] docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:17:08.155936  284195 network_create.go:284] running [docker network inspect kindnet-624324] to gather additional debugging logs...
	I1019 17:17:08.155963  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324
	W1019 17:17:08.178825  284195 cli_runner.go:211] docker network inspect kindnet-624324 returned with exit code 1
	I1019 17:17:08.178879  284195 network_create.go:287] error running [docker network inspect kindnet-624324]: docker network inspect kindnet-624324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-624324 not found
	I1019 17:17:08.178900  284195 network_create.go:289] output of [docker network inspect kindnet-624324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-624324 not found
	
	** /stderr **
	I1019 17:17:08.179015  284195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:08.201485  284195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:17:08.202620  284195 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:17:08.203616  284195 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:17:08.204577  284195 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a9c8e7e3ba20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:77:c0:aa:7f:5e} reservation:<nil>}
	I1019 17:17:08.205661  284195 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11e31399831a IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:85:d0:14:cb:57} reservation:<nil>}
	I1019 17:17:08.207035  284195 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018972a0}
	I1019 17:17:08.207140  284195 network_create.go:124] attempt to create docker network kindnet-624324 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 17:17:08.207204  284195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-624324 kindnet-624324
	I1019 17:17:08.273186  284195 network_create.go:108] docker network kindnet-624324 192.168.94.0/24 created
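	# The selected subnet can be confirmed with the same IPAM template the
	# network-inspect probes above use (a sketch; expected output: 192.168.94.0/24):
	docker network inspect kindnet-624324 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'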
	I1019 17:17:08.273220  284195 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-624324" container
	I1019 17:17:08.273287  284195 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:17:08.293001  284195 cli_runner.go:164] Run: docker volume create kindnet-624324 --label name.minikube.sigs.k8s.io=kindnet-624324 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:17:08.317157  284195 oci.go:103] Successfully created a docker volume kindnet-624324
	I1019 17:17:08.317263  284195 cli_runner.go:164] Run: docker run --rm --name kindnet-624324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-624324 --entrypoint /usr/bin/test -v kindnet-624324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:17:08.752902  284195 oci.go:107] Successfully prepared a docker volume kindnet-624324
	I1019 17:17:08.752939  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:08.752966  284195 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:17:08.753044  284195 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:17:08.468710  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:10.967920  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	I1019 17:17:13.285288  284195 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.53219807s)
	I1019 17:17:13.285328  284195 kic.go:203] duration metric: took 4.532358026s to extract preloaded images to volume ...
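	# The sidecar pattern above seeds a named volume before the node container
	# mounts it: tar runs in a throwaway container with the tarball bind-mounted
	# read-only and the volume as the extraction target. A generic sketch
	# (VOLUME, TARBALL, and the image tag are placeholders, not values from this run):
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "${TARBALL}:/preloaded.tar:ro" -v "${VOLUME}:/extractDir" \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> -I lz4 -xf /preloaded.tar -C /extractDir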
	W1019 17:17:13.285446  284195 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:17:13.285503  284195 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:17:13.285581  284195 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:17:13.352912  284195 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-624324 --name kindnet-624324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-624324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-624324 --network kindnet-624324 --ip 192.168.94.2 --volume kindnet-624324:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
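	# Note: each --publish=127.0.0.1:: flag above requests an ephemeral host
	# port; the actual port for 22/tcp is resolved later with the
	# NetworkSettings.Ports inspect template before the first SSH connection.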
	I1019 17:17:13.673210  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Running}}
	I1019 17:17:13.694043  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:13.715221  284195 cli_runner.go:164] Run: docker exec kindnet-624324 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:17:13.769251  284195 oci.go:144] the created container "kindnet-624324" has a running status.
	I1019 17:17:13.769295  284195 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa...
	I1019 17:17:14.395511  284195 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:17:14.428761  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:14.452564  284195 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:17:14.452592  284195 kic_runner.go:114] Args: [docker exec --privileged kindnet-624324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:17:14.506838  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:14.528132  284195 machine.go:94] provisionDockerMachine start ...
	I1019 17:17:14.528220  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.554803  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.555131  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.555150  284195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:17:14.700828  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-624324
	
	I1019 17:17:14.700860  284195 ubuntu.go:182] provisioning hostname "kindnet-624324"
	I1019 17:17:14.700930  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.722903  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.723210  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.723247  284195 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-624324 && echo "kindnet-624324" | sudo tee /etc/hostname
	I1019 17:17:14.871535  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-624324
	
	I1019 17:17:14.871620  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:14.894483  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:14.894729  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:14.894785  284195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-624324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-624324/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-624324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:17:15.034728  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
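Each provisioning step above is a one-shot command sent over the SSH port that the container publishes on 127.0.0.1 (33109 in this run). A minimal sketch of that pattern with golang.org/x/crypto/ssh; the key path and address are placeholders, and skipping host-key verification is only reasonable because the target is a local test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes a single command on the machine, the way the
// hostname and /etc/hosts provisioning commands in the log are run.
func runOverSSH(addr, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33109",
		"/path/to/machines/kindnet-624324/id_rsa", "hostname")
	fmt.Println(out, err)
}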
	I1019 17:17:15.034765  284195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:17:15.034806  284195 ubuntu.go:190] setting up certificates
	I1019 17:17:15.034822  284195 provision.go:84] configureAuth start
	I1019 17:17:15.034881  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:15.053543  284195 provision.go:143] copyHostCerts
	I1019 17:17:15.053608  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:17:15.053619  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:17:15.053704  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:17:15.053838  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:17:15.053855  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:17:15.053906  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:17:15.053998  284195 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:17:15.054008  284195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:17:15.054046  284195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:17:15.054139  284195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.kindnet-624324 san=[127.0.0.1 192.168.94.2 kindnet-624324 localhost minikube]
	I1019 17:17:15.700540  284195 provision.go:177] copyRemoteCerts
	I1019 17:17:15.700610  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:17:15.700657  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:15.719415  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:15.815615  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:17:15.835374  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1019 17:17:15.853394  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:17:15.870817  284195 provision.go:87] duration metric: took 835.979766ms to configureAuth
	I1019 17:17:15.870844  284195 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:17:15.870987  284195 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:15.871098  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:15.889170  284195 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:15.889386  284195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1019 17:17:15.889409  284195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:17:16.132058  284195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:17:16.132101  284195 machine.go:97] duration metric: took 1.6039452s to provisionDockerMachine
	I1019 17:17:16.132114  284195 client.go:174] duration metric: took 7.998864881s to LocalClient.Create
	I1019 17:17:16.132140  284195 start.go:167] duration metric: took 7.998941099s to libmachine.API.Create "kindnet-624324"
	I1019 17:17:16.132152  284195 start.go:293] postStartSetup for "kindnet-624324" (driver="docker")
	I1019 17:17:16.132164  284195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:17:16.132222  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:17:16.132276  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.153529  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.260735  284195 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:17:16.265310  284195 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:17:16.265345  284195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:17:16.265359  284195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:17:16.265411  284195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:17:16.265500  284195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:17:16.265628  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:17:16.275522  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:17:16.299434  284195 start.go:296] duration metric: took 167.267905ms for postStartSetup
	I1019 17:17:16.299836  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:16.321458  284195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/config.json ...
	I1019 17:17:16.321754  284195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:17:16.321798  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.343896  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.445114  284195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:17:16.450592  284195 start.go:128] duration metric: took 8.32032478s to createHost
	I1019 17:17:16.450620  284195 start.go:83] releasing machines lock for "kindnet-624324", held for 8.320477229s
	I1019 17:17:16.450711  284195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-624324
	I1019 17:17:16.473032  284195 ssh_runner.go:195] Run: cat /version.json
	I1019 17:17:16.473103  284195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:17:16.473129  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.473171  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:16.495216  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.495240  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:16.592529  284195 ssh_runner.go:195] Run: systemctl --version
	I1019 17:17:16.663777  284195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:17:16.707738  284195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:17:16.713595  284195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:17:16.713682  284195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:17:16.740983  284195 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
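Disabling the conflicting bridge CNI configs is a rename, not a delete, so they can be restored later. A sketch of the equivalent logic (must run as root on the node; the .mk_disabled suffix matches the find/mv pipeline above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs in dir by appending
// .mk_disabled, like the "find ... -exec mv {} {}.mk_disabled" in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}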
	I1019 17:17:16.741004  284195 start.go:496] detecting cgroup driver to use...
	I1019 17:17:16.741032  284195 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:17:16.741081  284195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:17:16.757791  284195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:17:16.772284  284195 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:17:16.772348  284195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:17:16.791264  284195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:17:16.809534  284195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:17:16.890052  284195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:17:16.983470  284195 docker.go:234] disabling docker service ...
	I1019 17:17:16.983529  284195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:17:17.002324  284195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:17:17.016190  284195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:17:17.114861  284195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:17:17.206441  284195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:17:17.222730  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:17:17.240434  284195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:17:17.240507  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.252391  284195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:17:17.252471  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.262248  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.272613  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.281637  284195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:17:17.290764  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.299872  284195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.313746  284195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:17.322780  284195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:17:17.330348  284195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:17:17.337978  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:17.420377  284195 ssh_runner.go:195] Run: sudo systemctl restart crio
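The sed pipeline above pins two keys in /etc/crio/crio.conf.d/02-crio.conf and then restarts the daemon. The same edit expressed as a Go sketch; the file path and values come from the log, while the regexes simply mirror the sed expressions:

package main

import (
	"fmt"
	"os"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// configureCrio pins the pause image and cgroup driver, equivalent to the
// two "sudo sed -i ... 02-crio.conf" invocations in the log.
func configureCrio(path, pauseImage, cgroupDriver string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := pauseRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = cgroupRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupDriver)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd")
	// a real caller then runs: systemctl daemon-reload && systemctl restart crio
	fmt.Println(err)
}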
	I1019 17:17:17.552141  284195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:17:17.552216  284195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:17:17.556491  284195 start.go:564] Will wait 60s for crictl version
	I1019 17:17:17.556540  284195 ssh_runner.go:195] Run: which crictl
	I1019 17:17:17.560782  284195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:17:17.588875  284195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:17:17.588962  284195 ssh_runner.go:195] Run: crio --version
	I1019 17:17:17.622448  284195 ssh_runner.go:195] Run: crio --version
	I1019 17:17:17.658763  284195 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:17:17.660284  284195 cli_runner.go:164] Run: docker network inspect kindnet-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:17.679634  284195 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 17:17:17.684471  284195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
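The /etc/hosts update above is the usual filter-then-append trick: drop any stale host.minikube.internal line, append a fresh one, and copy the result back. A sketch of the same idea (names and IP taken from the log; requires sufficient privilege to write the file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites an /etc/hosts-style file so exactly one line
// maps host, mirroring the grep -v / echo / sudo cp one-liner in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"))
}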
	I1019 17:17:17.701944  284195 kubeadm.go:884] updating cluster {Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:17:17.702274  284195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:17.702348  284195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:17:17.743014  284195 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:17:17.743037  284195 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:17:17.743118  284195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:17:17.768109  284195 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:17:17.768133  284195 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:17:17.768142  284195 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 17:17:17.768247  284195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-624324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1019 17:17:17.768342  284195 ssh_runner.go:195] Run: crio config
	I1019 17:17:17.816135  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:17.816177  284195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:17:17.816205  284195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-624324 NodeName:kindnet-624324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:17:17.816392  284195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-624324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:17:17.816479  284195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:17:17.827056  284195 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:17:17.827151  284195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:17:17.837743  284195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1019 17:17:17.854668  284195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:17:17.871385  284195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 17:17:17.884648  284195 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:17:17.888754  284195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:17:17.898862  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1019 17:17:13.468212  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:15.967294  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	I1019 17:17:18.431867  279986 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:17:18.431949  279986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:17:18.432118  279986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:17:18.432226  279986 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:17:18.432287  279986 kubeadm.go:319] OS: Linux
	I1019 17:17:18.432358  279986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:17:18.432425  279986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:17:18.432491  279986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:17:18.432561  279986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:17:18.432621  279986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:17:18.432674  279986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:17:18.432738  279986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:17:18.432784  279986 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:17:18.432863  279986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:17:18.432997  279986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:17:18.433168  279986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:17:18.433267  279986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:17:18.434877  279986 out.go:252]   - Generating certificates and keys ...
	I1019 17:17:18.434965  279986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:17:18.435057  279986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:17:18.435197  279986 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:17:18.435256  279986 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:17:18.435329  279986 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:17:18.435399  279986 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:17:18.435467  279986 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:17:18.435619  279986 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-624324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:17:18.435724  279986 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:17:18.435932  279986 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-624324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:17:18.436028  279986 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:17:18.436160  279986 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:17:18.436215  279986 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:17:18.436284  279986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:17:18.436360  279986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:17:18.436459  279986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:17:18.436546  279986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:17:18.436654  279986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:17:18.436732  279986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:17:18.436861  279986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:17:18.436969  279986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:17:18.438585  279986 out.go:252]   - Booting up control plane ...
	I1019 17:17:18.438712  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:17:18.438813  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:17:18.438905  279986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:17:18.439043  279986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:17:18.439217  279986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:17:18.439350  279986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:17:18.439481  279986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:17:18.439531  279986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:17:18.439647  279986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:17:18.439774  279986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:17:18.439839  279986 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00164339s
	I1019 17:17:18.439961  279986 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:17:18.440093  279986 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 17:17:18.440241  279986 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:17:18.440344  279986 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:17:18.440422  279986 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.765800179s
	I1019 17:17:18.440499  279986 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695365008s
	I1019 17:17:18.440559  279986 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501482623s
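The kubelet and control-plane checks above are plain HTTP(S) polls against healthz/livez endpoints, retried until they return 200 or a deadline expires. A minimal sketch; the insecure TLS config stands in for kubeadm's real client setup and is an assumption here:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the timeout elapses,
// the way kubeadm waits on /healthz and /livez in the log above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// local control plane serves self-signed certs; skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.76.2:8443/livez", 4*time.Minute))
}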
	I1019 17:17:18.440659  279986 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:17:18.440776  279986 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:17:18.440834  279986 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:17:18.441109  279986 kubeadm.go:319] [mark-control-plane] Marking the node auto-624324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:17:18.441168  279986 kubeadm.go:319] [bootstrap-token] Using token: jhs3kv.w1yqlyxcfw05u8f4
	I1019 17:17:18.443121  279986 out.go:252]   - Configuring RBAC rules ...
	I1019 17:17:18.443216  279986 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:17:18.443345  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:17:18.443543  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:17:18.443732  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:17:18.443892  279986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:17:18.444050  279986 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:17:18.444213  279986 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:17:18.444277  279986 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:17:18.444349  279986 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:17:18.444359  279986 kubeadm.go:319] 
	I1019 17:17:18.444455  279986 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:17:18.444474  279986 kubeadm.go:319] 
	I1019 17:17:18.444566  279986 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:17:18.444578  279986 kubeadm.go:319] 
	I1019 17:17:18.444619  279986 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:17:18.444710  279986 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:17:18.444781  279986 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:17:18.444791  279986 kubeadm.go:319] 
	I1019 17:17:18.444878  279986 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:17:18.444890  279986 kubeadm.go:319] 
	I1019 17:17:18.444931  279986 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:17:18.444940  279986 kubeadm.go:319] 
	I1019 17:17:18.444994  279986 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:17:18.445115  279986 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:17:18.445219  279986 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:17:18.445232  279986 kubeadm.go:319] 
	I1019 17:17:18.445330  279986 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:17:18.445444  279986 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:17:18.445453  279986 kubeadm.go:319] 
	I1019 17:17:18.445558  279986 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jhs3kv.w1yqlyxcfw05u8f4 \
	I1019 17:17:18.445669  279986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:17:18.445703  279986 kubeadm.go:319] 	--control-plane 
	I1019 17:17:18.445716  279986 kubeadm.go:319] 
	I1019 17:17:18.445822  279986 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:17:18.445830  279986 kubeadm.go:319] 
	I1019 17:17:18.445935  279986 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jhs3kv.w1yqlyxcfw05u8f4 \
	I1019 17:17:18.446095  279986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
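The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's Subject Public Key Info, which joining nodes use to pin the CA. A sketch of computing it from ca.crt (the path below is the one minikube uses in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the kubeadm discovery hash: sha256 over the DER
// encoding of the CA certificate's SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}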
	I1019 17:17:18.446117  279986 cni.go:84] Creating CNI manager for ""
	I1019 17:17:18.446129  279986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:17:18.448323  279986 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:17:18.449497  279986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:17:18.454124  279986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:17:18.454142  279986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:17:18.469699  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:17:18.695481  279986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:17:18.695539  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:18.695590  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-624324 minikube.k8s.io/updated_at=2025_10_19T17_17_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=auto-624324 minikube.k8s.io/primary=true
	I1019 17:17:18.779735  279986 ops.go:34] apiserver oom_adj: -16
	I1019 17:17:18.779864  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:19.280273  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:19.780304  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:20.280725  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:20.779948  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:21.280292  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:21.780267  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:22.280245  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:17.985900  284195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:17:18.009520  284195 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324 for IP: 192.168.94.2
	I1019 17:17:18.009555  284195 certs.go:195] generating shared ca certs ...
	I1019 17:17:18.009574  284195 certs.go:227] acquiring lock for ca certs: {Name:mk4c4a2dfd94a54e3626e99fce4b6f5183eeaf4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.009753  284195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key
	I1019 17:17:18.009795  284195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key
	I1019 17:17:18.009807  284195 certs.go:257] generating profile certs ...
	I1019 17:17:18.009886  284195 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key
	I1019 17:17:18.009909  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt with IP's: []
	I1019 17:17:18.047243  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt ...
	I1019 17:17:18.047288  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.crt: {Name:mk4f83d3317146f3a69a91e9c2e25c772b5846a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.047502  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key ...
	I1019 17:17:18.047517  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/client.key: {Name:mk0bf4a6486efc5c7bdd97d8013fa135dde0f437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.047626  284195 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa
	I1019 17:17:18.047648  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 17:17:18.543932  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa ...
	I1019 17:17:18.543960  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa: {Name:mkd2e3c36a12da51079c4c18ca9f93419cb75824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.544174  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa ...
	I1019 17:17:18.544192  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa: {Name:mk2f7dd6807ac203a87fa35802ac891d2c701900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.544298  284195 certs.go:382] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt.f95fb5fa -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt
	I1019 17:17:18.544422  284195 certs.go:386] copying /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key.f95fb5fa -> /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key
	I1019 17:17:18.544521  284195 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key
	I1019 17:17:18.544541  284195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt with IP's: []
	I1019 17:17:18.596758  284195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt ...
	I1019 17:17:18.596786  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt: {Name:mkba0841fda4775d4d5e70d6c80cb8080b9ba0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:18.596992  284195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key ...
	I1019 17:17:18.597007  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key: {Name:mk36459cd4bff07ed5dece50bbeffb7dc8fb0574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
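Each profile cert generated above is an ordinary leaf certificate signed by the shared minikube CA, with the relevant IPs as SANs. A compact sketch of the signing step with crypto/x509; the key size, validity window, and throwaway CA in main are illustrative assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signLeaf issues a certificate for the given IPs, signed by the CA, the
// way the apiserver and client profile certs are generated in the log.
func signLeaf(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// hypothetical throwaway CA so the sketch runs end to end
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := signLeaf(ca, caKey, []net.IP{net.ParseIP("192.168.94.2")})
	fmt.Println(len(der), err)
}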
	I1019 17:17:18.597247  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem (1338 bytes)
	W1019 17:17:18.597285  284195 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228_empty.pem, impossibly tiny 0 bytes
	I1019 17:17:18.597296  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:17:18.597325  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:17:18.597356  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:17:18.597386  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem (1679 bytes)
	I1019 17:17:18.597424  284195 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:17:18.598215  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:17:18.617934  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:17:18.639287  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:17:18.658813  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:17:18.680699  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:17:18.703555  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:17:18.725613  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:17:18.752358  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/kindnet-624324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:17:18.776748  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/7228.pem --> /usr/share/ca-certificates/7228.pem (1338 bytes)
	I1019 17:17:18.801138  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /usr/share/ca-certificates/72282.pem (1708 bytes)
	I1019 17:17:18.821145  284195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:17:18.841542  284195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:17:18.855339  284195 ssh_runner.go:195] Run: openssl version
	I1019 17:17:18.861675  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:17:18.870707  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.874706  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.874774  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:17:18.913860  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:17:18.923379  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7228.pem && ln -fs /usr/share/ca-certificates/7228.pem /etc/ssl/certs/7228.pem"
	I1019 17:17:18.932605  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.937171  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:26 /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.937226  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7228.pem
	I1019 17:17:18.974641  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7228.pem /etc/ssl/certs/51391683.0"
	I1019 17:17:18.983927  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72282.pem && ln -fs /usr/share/ca-certificates/72282.pem /etc/ssl/certs/72282.pem"
	I1019 17:17:18.993000  284195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72282.pem
	I1019 17:17:18.997287  284195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:26 /usr/share/ca-certificates/72282.pem
	I1019 17:17:18.997347  284195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72282.pem
	I1019 17:17:19.032130  284195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72282.pem /etc/ssl/certs/3ec20f2e.0"
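The b5213941.0-style names above are OpenSSL subject-hash links: c_rehash-style symlinks that let TLS libraries find a CA by its hashed subject. A sketch that shells out to openssl for the hash and creates the link, as the commands above do (paths from the log; needs write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks certsDir/<subject-hash>.0 to the PEM file, matching
// the "openssl x509 -hash -noout" plus "ln -fs" sequence in the log.
func linkCACert(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}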
	I1019 17:17:19.041512  284195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:17:19.045776  284195 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:17:19.045832  284195 kubeadm.go:401] StartCluster: {Name:kindnet-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:19.045917  284195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:17:19.045989  284195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:17:19.074264  284195 cri.go:89] found id: ""
	I1019 17:17:19.074338  284195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:17:19.082812  284195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:17:19.091229  284195 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:17:19.091288  284195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:17:19.099227  284195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:17:19.099246  284195 kubeadm.go:158] found existing configuration files:
	
	I1019 17:17:19.099295  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:17:19.107106  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:17:19.107156  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:17:19.115165  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:17:19.123514  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:17:19.123572  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:17:19.131204  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:17:19.139302  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:17:19.139357  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:17:19.147017  284195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:17:19.154633  284195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:17:19.154693  284195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
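The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm can write fresh ones. A minimal standalone sketch of the same sweep (hypothetical; minikube actually issues each step as a separate SSH command, as logged):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits non-zero both when the endpoint is absent and when the
        # file does not exist (status 2, as logged above); either way, remove
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done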
	I1019 17:17:19.162194  284195 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:17:19.223749  284195 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:17:19.288213  284195 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
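The SystemVerification warning above comes from kubeadm's kernel-config validator: inside a docker-driver node container there is no /proc/config.gz and no /boot/config-$(uname -r), so the validator falls back to `modprobe configs`, which fails because the host's module tree is not available. A read-only check (run inside the node) that makes this visible:

    # neither path being readable is what forces the modprobe fallback
    ls -l /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null || echo "no kernel config in node"

The warning is cosmetic for these tests: SystemVerification is already on the --ignore-preflight-errors list passed to kubeadm init above.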
	W1019 17:17:18.467381  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	W1019 17:17:20.967535  274481 pod_ready.go:104] pod "coredns-66bc5c9577-2r8tf" is not "Ready", error: <nil>
	I1019 17:17:22.780211  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:23.280699  279986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:23.363828  279986 kubeadm.go:1114] duration metric: took 4.668354333s to wait for elevateKubeSystemPrivileges
	I1019 17:17:23.363867  279986 kubeadm.go:403] duration metric: took 15.439895499s to StartCluster
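The back-to-back `kubectl get sa default` runs above are a readiness poll: minikube re-queries until the `default` ServiceAccount exists, which signals that kube-controller-manager's service-account controller is live, at which point elevateKubeSystemPrivileges completes. A hypothetical shell equivalent of the same wait:

    KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
    until sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
        sleep 0.5   # the log above shows ~500 ms between attempts
    done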
	I1019 17:17:23.363890  279986 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:23.363962  279986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:23.368458  279986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:23.368732  279986 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:23.369085  279986 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:17:23.369276  279986 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:23.369288  279986 addons.go:70] Setting default-storageclass=true in profile "auto-624324"
	I1019 17:17:23.369303  279986 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-624324"
	I1019 17:17:23.369276  279986 addons.go:70] Setting storage-provisioner=true in profile "auto-624324"
	I1019 17:17:23.369336  279986 addons.go:239] Setting addon storage-provisioner=true in "auto-624324"
	I1019 17:17:23.369368  279986 host.go:66] Checking if "auto-624324" exists ...
	I1019 17:17:23.369176  279986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:17:23.369680  279986 cli_runner.go:164] Run: docker container inspect auto-624324 --format={{.State.Status}}
	I1019 17:17:23.369899  279986 cli_runner.go:164] Run: docker container inspect auto-624324 --format={{.State.Status}}
	I1019 17:17:23.370346  279986 out.go:179] * Verifying Kubernetes components...
	I1019 17:17:23.371906  279986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:23.398687  279986 addons.go:239] Setting addon default-storageclass=true in "auto-624324"
	I1019 17:17:23.398736  279986 host.go:66] Checking if "auto-624324" exists ...
	I1019 17:17:23.399445  279986 cli_runner.go:164] Run: docker container inspect auto-624324 --format={{.State.Status}}
	I1019 17:17:23.401243  279986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:17:23.402557  279986 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:23.402577  279986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:17:23.402646  279986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-624324
	I1019 17:17:23.434045  279986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/auto-624324/id_rsa Username:docker}
	I1019 17:17:23.440318  279986 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:23.440349  279986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:17:23.440411  279986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-624324
	I1019 17:17:23.469701  279986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/auto-624324/id_rsa Username:docker}
	I1019 17:17:23.490322  279986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:17:23.565193  279986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:23.565584  279986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:17:23.594359  279986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:23.714962  279986 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1019 17:17:23.944580  279986 node_ready.go:35] waiting up to 15m0s for node "auto-624324" to be "Ready" ...
	I1019 17:17:23.950897  279986 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
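The ConfigMap pipeline above splices a hosts{} stanza mapping host.minikube.internal to the gateway IP (192.168.76.1 for the auto-624324 profile) into the Corefile ahead of the forward plugin. Once the ConfigMap is replaced, the record can be spot-checked from a throwaway pod; the pod name and image tag below are illustrative:

    kubectl run dnscheck --rm -it --restart=Never --image=busybox:1.36 -- \
        nslookup host.minikube.internal
    # expected answer: 192.168.76.1, the host record injected above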
	
	
	==> CRI-O <==
	Oct 19 17:16:46 embed-certs-090139 crio[564]: time="2025-10-19T17:16:46.269049443Z" level=info msg="Started container" PID=1742 containerID=9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper id=5b62a28f-1b29-49cc-b887-38688f69cc5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c3557327c37e444b32708db3fbfd9f8039c4dc4823ffcaca33c0f5a032a40f0
	Oct 19 17:16:47 embed-certs-090139 crio[564]: time="2025-10-19T17:16:47.230554127Z" level=info msg="Removing container: b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954" id=a5a5eedd-f9f9-4f0e-8469-e8ae7ebc6040 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:16:47 embed-certs-090139 crio[564]: time="2025-10-19T17:16:47.247466658Z" level=info msg="Removed container b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=a5a5eedd-f9f9-4f0e-8469-e8ae7ebc6040 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.292026395Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4de068c8-11ad-4347-8d63-ec85c1efc851 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.293029696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e376e10-4c45-4c56-a7bd-a07c0ef333bd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.294166635Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3841b89b-806b-46e0-acd1-f7f64bc158f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.294447095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299427147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299627339Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1013ccf33efef85063a43b13879c29f417d9c16b61927574b4b50a2cd13c1122/merged/etc/passwd: no such file or directory"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.299667316Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1013ccf33efef85063a43b13879c29f417d9c16b61927574b4b50a2cd13c1122/merged/etc/group: no such file or directory"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.3000104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.328803028Z" level=info msg="Created container db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae: kube-system/storage-provisioner/storage-provisioner" id=3841b89b-806b-46e0-acd1-f7f64bc158f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.32953414Z" level=info msg="Starting container: db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae" id=16a2cc02-2ff8-4718-8932-83053dbd6d95 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:05 embed-certs-090139 crio[564]: time="2025-10-19T17:17:05.332214241Z" level=info msg="Started container" PID=1756 containerID=db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae description=kube-system/storage-provisioner/storage-provisioner id=16a2cc02-2ff8-4718-8932-83053dbd6d95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b14a17ab588b166a6522b565dfccd8b0c1aff548224a7a12a4f47f1a10327325
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.138039045Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=62e7ecb2-bd97-411a-a76f-fb59e75d7729 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.139204802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39ef34c1-e5d7-4314-be3a-29c5e406cd2f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.140407735Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f1781bd7-bd32-489c-b5c5-cc242e8d5e21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.140690914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.146802727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.147529561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.177053656Z" level=info msg="Created container c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f1781bd7-bd32-489c-b5c5-cc242e8d5e21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.177746538Z" level=info msg="Starting container: c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5" id=40dbbb07-4555-4fed-af6d-7711ef31f5a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.179861329Z" level=info msg="Started container" PID=1770 containerID=c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper id=40dbbb07-4555-4fed-af6d-7711ef31f5a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c3557327c37e444b32708db3fbfd9f8039c4dc4823ffcaca33c0f5a032a40f0
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.30581667Z" level=info msg="Removing container: 9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef" id=f7c94bc0-e118-41a6-8138-a4a9c966cd35 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:08 embed-certs-090139 crio[564]: time="2025-10-19T17:17:08.317230091Z" level=info msg="Removed container 9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk/dashboard-metrics-scraper" id=f7c94bc0-e118-41a6-8138-a4a9c966cd35 name=/runtime.v1.RuntimeService/RemoveContainer
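The create/start/remove churn above is the dashboard-metrics-scraper container restarting under back-off; the kubelet section later in this log shows the matching CrashLoopBackOff events. On the node, the failing attempt's output can be pulled straight from CRI-O (the container ID prefix is taken from this run):

    sudo crictl ps -a --name dashboard-metrics-scraper   # lists the Exited attempts
    sudo crictl logs c41b3f083e0df                       # last attempt created above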
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c41b3f083e0df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   3c3557327c37e       dashboard-metrics-scraper-6ffb444bf9-pg7gk   kubernetes-dashboard
	db2380c01b5a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   b14a17ab588b1       storage-provisioner                          kube-system
	39be18dfee0d9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   c70e8512092be       kubernetes-dashboard-855c9754f9-9d29n        kubernetes-dashboard
	8120323c27ea3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   0096a6550d887       busybox                                      default
	f28bfcad6c405       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   d9b65e18ffea1       coredns-66bc5c9577-zw7d8                     kube-system
	032a52a687225       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   463821b71204f       kube-proxy-8f4lh                             kube-system
	2019570c30b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   b14a17ab588b1       storage-provisioner                          kube-system
	0a03ae2cd978a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   40707355c2255       kindnet-dwsh7                                kube-system
	3c6fd3249cca2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   103b1f9af531a       etcd-embed-certs-090139                      kube-system
	8c97264fa8b22       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   b6f7c9e1a1deb       kube-apiserver-embed-certs-090139            kube-system
	7269af7f81934       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   021a7bc6fdc16       kube-scheduler-embed-certs-090139            kube-system
	d957ab9f5db99       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   43c8d7de49ce8       kube-controller-manager-embed-certs-090139   kube-system
	
	
	==> coredns [f28bfcad6c405761f300339ad1d2a3ab9ac98c74395fd2d648954d7a5021f311] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38332 - 43849 "HINFO IN 5198447218566340739.7056095054614731733. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072693336s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
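All three timeouts above target 10.96.0.1:443, the `kubernetes` Service VIP, so CoreDNS briefly could not reach the apiserver through kube-proxy after the restart. Since this cluster uses the iptables proxier (see the kube-proxy section below), one way to confirm the VIP is programmed on the node, as a diagnostic rather than part of the run:

    sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
    # a KUBE-SVC-... rule for 10.96.0.1 tcp dpt:443 means kube-proxy has synced it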
	
	
	==> describe nodes <==
	Name:               embed-certs-090139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-090139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-090139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-090139
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:17:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:17:04 +0000   Sun, 19 Oct 2025 17:15:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-090139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                308b3de9-570c-4288-a8e0-c3790dfe5ce4
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-zw7d8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-090139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-dwsh7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-090139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-090139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-8f4lh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-090139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pg7gk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9d29n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-090139 event: Registered Node embed-certs-090139 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-090139 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-090139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-090139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-090139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-090139 event: Registered Node embed-certs-090139 in Controller
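As a cross-check on the table above, the totals are just the sums of the per-pod rows under Non-terminated Pods:

    # CPU requests: coredns + etcd + kindnet + apiserver + controller-manager + scheduler
    echo "$((100 + 100 + 100 + 250 + 200 + 100))m"   # -> 850m
    # memory requests: coredns 70Mi + etcd 100Mi + kindnet 50Mi -> 220Mi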
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
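The repeating entries above mean eth0 received packets claiming a 127.0.0.1 source for pod IP 10.244.0.21, which the kernel flags as impossible ("martian") routing. Whether such packets are logged at all is a sysctl; a read-only inspection:

    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians
    # 1 = log martians (as seen above); 0 silences them at the cost of visibility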
	
	
	==> etcd [3c6fd3249cca231ede96171d1c7342f490e2c1970dd6df69631ba08bbac70dda] <==
	{"level":"warn","ts":"2025-10-19T17:16:32.811785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.818857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.825457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.831846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.841502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.847969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.854840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.862220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.869170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.876527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.886264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.892095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.899340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.905898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.912469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.918732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.926511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.932881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.940148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.955873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:32.967250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:33.019042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48188","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:43.366277Z","caller":"traceutil/trace.go:172","msg":"trace[279885890] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"103.025414ms","start":"2025-10-19T17:16:43.263231Z","end":"2025-10-19T17:16:43.366256Z","steps":["trace[279885890] 'process raft request'  (duration: 78.883359ms)","trace[279885890] 'compare'  (duration: 23.909272ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:16:43.374840Z","caller":"traceutil/trace.go:172","msg":"trace[50804672] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"108.667413ms","start":"2025-10-19T17:16:43.266151Z","end":"2025-10-19T17:16:43.374818Z","steps":["trace[50804672] 'process raft request'  (duration: 108.599902ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:43.374875Z","caller":"traceutil/trace.go:172","msg":"trace[1630067863] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"109.091739ms","start":"2025-10-19T17:16:43.265760Z","end":"2025-10-19T17:16:43.374851Z","steps":["trace[1630067863] 'process raft request'  (duration: 108.884222ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:26 up 59 min,  0 user,  load average: 4.47, 3.38, 2.05
	Linux embed-certs-090139 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a03ae2cd978a67ae2325f57237113942f56a65c39a49b00b59543933475e052] <==
	I1019 17:16:34.661339       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:34.661577       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 17:16:34.661746       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:34.661766       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:34.661791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:34.900476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:34.900537       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:34.900554       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:34.998565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:35.397784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:35.397822       1 metrics.go:72] Registering metrics
	I1019 17:16:35.397892       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:44.900288       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:16:44.900351       1 main.go:301] handling current node
	I1019 17:16:54.903526       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:16:54.903572       1 main.go:301] handling current node
	I1019 17:17:04.900444       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:17:04.900514       1 main.go:301] handling current node
	I1019 17:17:14.905162       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:17:14.905205       1 main.go:301] handling current node
	I1019 17:17:24.909151       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 17:17:24.909193       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c97264fa8b225a884756f7a5ec2d9e5e99aa8adb8765570ed3a783b339f1d85] <==
	I1019 17:16:33.523971       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:33.523981       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:33.523987       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:33.523995       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:33.523521       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:16:33.523184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:16:33.523512       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:16:33.523532       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:16:33.523648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:33.533762       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:33.535714       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:33.541859       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:33.543149       1 policy_source.go:240] refreshing policies
	I1019 17:16:33.568232       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:33.841759       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:33.872646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:33.896680       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:33.903967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:33.911430       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:33.950991       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.235.194"}
	I1019 17:16:33.961891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.14.128"}
	I1019 17:16:34.426401       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:37.224433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:37.422601       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:37.470365       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d957ab9f5db999a8e3d596f5eb09406aefbc41ab698ebeda9c2f79b429ea08a0] <==
	I1019 17:16:36.840157       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:36.840163       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:36.841214       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:36.843465       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:36.867010       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:16:36.867034       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:36.867058       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:16:36.867102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:16:36.867215       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:36.867002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:16:36.867230       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:36.867239       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:16:36.867076       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:16:36.867457       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:16:36.867547       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:16:36.868464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:16:36.868504       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:36.868510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:16:36.868518       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:16:36.868522       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:16:36.872118       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:36.872916       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:16:36.876421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:36.893131       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:36.898375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [032a52a6872256e9477b486431f9879e94f744c0af17fc0c51bc366d518fd888] <==
	I1019 17:16:34.527893       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:34.586140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:34.686942       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:34.686979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1019 17:16:34.687083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:34.705183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:34.705230       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:34.710084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:34.710442       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:34.710471       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:34.712636       1 config.go:200] "Starting service config controller"
	I1019 17:16:34.712722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:34.712770       1 config.go:309] "Starting node config controller"
	I1019 17:16:34.712785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:34.712801       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:34.712869       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:34.712881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:34.712908       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:34.712918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:34.813764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:34.813788       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:34.813844       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7269af7f81934d889a105bbbc2b1ebea2710e7a60bf8ecc35fb25c89f259a974] <==
	I1019 17:16:31.859412       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:16:33.533170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:33.533205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:33.539845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:16:33.539894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.539910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:16:33.539921       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.540379       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:33.540409       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:33.541398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:33.541493       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:33.641110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:33.641255       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:16:33.641611       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: E1019 17:16:37.403847     721 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-090139\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'embed-certs-090139' and this object" logger="UnhandledError" reflector="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485131     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8667f19d-4c29-4376-8168-ba8ac48bde56-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9d29n\" (UID: \"8667f19d-4c29-4376-8168-ba8ac48bde56\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485194     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btf6\" (UniqueName: \"kubernetes.io/projected/8667f19d-4c29-4376-8168-ba8ac48bde56-kube-api-access-8btf6\") pod \"kubernetes-dashboard-855c9754f9-9d29n\" (UID: \"8667f19d-4c29-4376-8168-ba8ac48bde56\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485293     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pg7gk\" (UID: \"bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk"
	Oct 19 17:16:37 embed-certs-090139 kubelet[721]: I1019 17:16:37.485346     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxs2\" (UniqueName: \"kubernetes.io/projected/bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5-kube-api-access-vmxs2\") pod \"dashboard-metrics-scraper-6ffb444bf9-pg7gk\" (UID: \"bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk"
	Oct 19 17:16:45 embed-certs-090139 kubelet[721]: I1019 17:16:45.668760     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9d29n" podStartSLOduration=4.395073606 podStartE2EDuration="8.668734463s" podCreationTimestamp="2025-10-19 17:16:37 +0000 UTC" firstStartedPulling="2025-10-19 17:16:38.331489758 +0000 UTC m=+7.303151252" lastFinishedPulling="2025-10-19 17:16:42.605150618 +0000 UTC m=+11.576812109" observedRunningTime="2025-10-19 17:16:43.261373569 +0000 UTC m=+12.233035075" watchObservedRunningTime="2025-10-19 17:16:45.668734463 +0000 UTC m=+14.640395967"
	Oct 19 17:16:46 embed-certs-090139 kubelet[721]: I1019 17:16:46.222044     721 scope.go:117] "RemoveContainer" containerID="b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: I1019 17:16:47.228211     721 scope.go:117] "RemoveContainer" containerID="b217600ed8e25df530de92df958369f5d9a8afa646181e25fd5e596585b15954"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: I1019 17:16:47.228613     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:47 embed-certs-090139 kubelet[721]: E1019 17:16:47.228838     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:16:48 embed-certs-090139 kubelet[721]: I1019 17:16:48.238212     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:48 embed-certs-090139 kubelet[721]: E1019 17:16:48.238460     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:16:54 embed-certs-090139 kubelet[721]: I1019 17:16:54.897244     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:16:54 embed-certs-090139 kubelet[721]: E1019 17:16:54.897507     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:05 embed-certs-090139 kubelet[721]: I1019 17:17:05.291502     721 scope.go:117] "RemoveContainer" containerID="2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.137392     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.304429     721 scope.go:117] "RemoveContainer" containerID="9101226d2f073efa622c4868e85c8e8fd2db96dbf5f00cbc36ee69be2d09eaef"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: I1019 17:17:08.304670     721 scope.go:117] "RemoveContainer" containerID="c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	Oct 19 17:17:08 embed-certs-090139 kubelet[721]: E1019 17:17:08.304886     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:14 embed-certs-090139 kubelet[721]: I1019 17:17:14.897015     721 scope.go:117] "RemoveContainer" containerID="c41b3f083e0df7059c689e87000a2d83bb8eceaf028e0d55e1936c91a7f332f5"
	Oct 19 17:17:14 embed-certs-090139 kubelet[721]: E1019 17:17:14.897251     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pg7gk_kubernetes-dashboard(bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pg7gk" podUID="bf8c9703-2a15-44c7-860b-d6bb1f5f2cd5"
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:17:20 embed-certs-090139 systemd[1]: kubelet.service: Consumed 1.705s CPU time.
	
	
	==> kubernetes-dashboard [39be18dfee0d94b64b58273530929a496bf5ad72be01310a470fdbb249d21670] <==
	2025/10/19 17:16:42 Using namespace: kubernetes-dashboard
	2025/10/19 17:16:42 Using in-cluster config to connect to apiserver
	2025/10/19 17:16:42 Using secret token for csrf signing
	2025/10/19 17:16:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:16:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:16:42 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:16:42 Generating JWE encryption key
	2025/10/19 17:16:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:16:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:16:42 Initializing JWE encryption key from synchronized object
	2025/10/19 17:16:42 Creating in-cluster Sidecar client
	2025/10/19 17:16:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:42 Serving insecurely on HTTP port: 9090
	2025/10/19 17:17:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:42 Starting overwatch
	
	
	==> storage-provisioner [2019570c30b89ab8c351e4d64d6ddd8cc33437e4b912376c44b0d230f8bce722] <==
	I1019 17:16:34.488223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:17:04.490483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db2380c01b5a9e27881495bdbbb23cd4d9a4f1a24834f3b8b8bfeec346b8dcae] <==
	I1019 17:17:05.351830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:17:05.363016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:17:05.363047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:17:05.366281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:08.822244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:13.083038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:16.681723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:19.735450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.757961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.763754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:22.763919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:17:22.764029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abf9a435-53d4-45a2-bf52-58f629c09914", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c became leader
	I1019 17:17:22.764211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c!
	W1019 17:17:22.766630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:22.778338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:22.864819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-090139_f31ec667-7599-40ec-ba94-bfbd0834bc1c!
	W1019 17:17:24.781706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:24.786145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-090139 -n embed-certs-090139: exit status 2 (402.392494ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-090139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.89s)
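Note on the kubelet excerpt above: the dashboard-metrics-scraper back-off climbs from 10s to 20s across successive crashes. A minimal sketch of that schedule (this is not kubelet or minikube code; it assumes kubelet's documented CrashLoopBackOff behavior of doubling the delay up to a 5m cap):

	// crashloop_backoff_sketch.go - prints the restart delays implied by the
	// kubelet messages above ("back-off 10s", then "back-off 20s", ...).
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := 10 * time.Second        // kubelet's initial CrashLoopBackOff delay
		const maxDelay = 5 * time.Minute // documented cap on the back-off
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

By the sixth restart the delay saturates at 5m0s, which is why a crash-looping pod can sit idle for minutes between attempts.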
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-663015 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-663015 --alsologtostderr -v=1: exit status 80 (1.816538589s)
-- stdout --
	* Pausing node default-k8s-diff-port-663015 ... 
	
	
-- /stdout --
** stderr ** 
	I1019 17:17:39.216143  291722 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:39.216450  291722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:39.216462  291722 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:39.216469  291722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:39.216755  291722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:39.217107  291722 out.go:368] Setting JSON to false
	I1019 17:17:39.217159  291722 mustload.go:66] Loading cluster: default-k8s-diff-port-663015
	I1019 17:17:39.217647  291722 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:39.218226  291722 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-663015 --format={{.State.Status}}
	I1019 17:17:39.240784  291722 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:17:39.241166  291722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:39.310823  291722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-19 17:17:39.298279429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:39.311731  291722 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-663015 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:17:39.313969  291722 out.go:179] * Pausing node default-k8s-diff-port-663015 ... 
	I1019 17:17:39.315174  291722 host.go:66] Checking if "default-k8s-diff-port-663015" exists ...
	I1019 17:17:39.315552  291722 ssh_runner.go:195] Run: systemctl --version
	I1019 17:17:39.315599  291722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-663015
	I1019 17:17:39.338021  291722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/default-k8s-diff-port-663015/id_rsa Username:docker}
	I1019 17:17:39.444259  291722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:39.464850  291722 pause.go:52] kubelet running: true
	I1019 17:17:39.464939  291722 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:39.658339  291722 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:39.658427  291722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:39.753153  291722 cri.go:89] found id: "4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d"
	I1019 17:17:39.753181  291722 cri.go:89] found id: "5549fb115a4d8128a67647e53adefbd5f1396f4eca49ed1c46a0a85127887340"
	I1019 17:17:39.753187  291722 cri.go:89] found id: "90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25"
	I1019 17:17:39.753193  291722 cri.go:89] found id: "cf5b36f1b400873a6f64ccde1cbf959c0adf80a4cbee8a27050d0adb93e938aa"
	I1019 17:17:39.753198  291722 cri.go:89] found id: "15343a83908f85231015b0d8768253b6b0aae7ec917d83ac88ef6e5b58711ebc"
	I1019 17:17:39.753203  291722 cri.go:89] found id: "6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238"
	I1019 17:17:39.753207  291722 cri.go:89] found id: "0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b"
	I1019 17:17:39.753212  291722 cri.go:89] found id: "98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1"
	I1019 17:17:39.753216  291722 cri.go:89] found id: "79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854"
	I1019 17:17:39.753224  291722 cri.go:89] found id: "90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	I1019 17:17:39.753236  291722 cri.go:89] found id: "955e744fa1a4514fad91045a63abd0edc7a8e64dcf7069fcb10271b34fac88fe"
	I1019 17:17:39.753240  291722 cri.go:89] found id: ""
	I1019 17:17:39.753286  291722 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:39.768539  291722 retry.go:31] will retry after 233.491519ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:39Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:17:40.003028  291722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:40.016953  291722 pause.go:52] kubelet running: false
	I1019 17:17:40.017007  291722 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:40.178277  291722 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:40.178370  291722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:40.250388  291722 cri.go:89] found id: "4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d"
	I1019 17:17:40.250408  291722 cri.go:89] found id: "5549fb115a4d8128a67647e53adefbd5f1396f4eca49ed1c46a0a85127887340"
	I1019 17:17:40.250411  291722 cri.go:89] found id: "90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25"
	I1019 17:17:40.250414  291722 cri.go:89] found id: "cf5b36f1b400873a6f64ccde1cbf959c0adf80a4cbee8a27050d0adb93e938aa"
	I1019 17:17:40.250417  291722 cri.go:89] found id: "15343a83908f85231015b0d8768253b6b0aae7ec917d83ac88ef6e5b58711ebc"
	I1019 17:17:40.250420  291722 cri.go:89] found id: "6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238"
	I1019 17:17:40.250423  291722 cri.go:89] found id: "0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b"
	I1019 17:17:40.250425  291722 cri.go:89] found id: "98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1"
	I1019 17:17:40.250428  291722 cri.go:89] found id: "79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854"
	I1019 17:17:40.250438  291722 cri.go:89] found id: "90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	I1019 17:17:40.250441  291722 cri.go:89] found id: "955e744fa1a4514fad91045a63abd0edc7a8e64dcf7069fcb10271b34fac88fe"
	I1019 17:17:40.250443  291722 cri.go:89] found id: ""
	I1019 17:17:40.250480  291722 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:40.262541  291722 retry.go:31] will retry after 450.195061ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:40Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:17:40.713189  291722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:40.726406  291722 pause.go:52] kubelet running: false
	I1019 17:17:40.726469  291722 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:17:40.881119  291722 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:17:40.881211  291722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:17:40.955021  291722 cri.go:89] found id: "4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d"
	I1019 17:17:40.955050  291722 cri.go:89] found id: "5549fb115a4d8128a67647e53adefbd5f1396f4eca49ed1c46a0a85127887340"
	I1019 17:17:40.955056  291722 cri.go:89] found id: "90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25"
	I1019 17:17:40.955061  291722 cri.go:89] found id: "cf5b36f1b400873a6f64ccde1cbf959c0adf80a4cbee8a27050d0adb93e938aa"
	I1019 17:17:40.955095  291722 cri.go:89] found id: "15343a83908f85231015b0d8768253b6b0aae7ec917d83ac88ef6e5b58711ebc"
	I1019 17:17:40.955101  291722 cri.go:89] found id: "6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238"
	I1019 17:17:40.955106  291722 cri.go:89] found id: "0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b"
	I1019 17:17:40.955110  291722 cri.go:89] found id: "98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1"
	I1019 17:17:40.955114  291722 cri.go:89] found id: "79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854"
	I1019 17:17:40.955131  291722 cri.go:89] found id: "90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	I1019 17:17:40.955136  291722 cri.go:89] found id: "955e744fa1a4514fad91045a63abd0edc7a8e64dcf7069fcb10271b34fac88fe"
	I1019 17:17:40.955140  291722 cri.go:89] found id: ""
	I1019 17:17:40.955184  291722 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:17:40.971565  291722 out.go:203] 
	W1019 17:17:40.973179  291722 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:17:40.973204  291722 out.go:285] * 
	* 
	W1019 17:17:40.978360  291722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:17:40.979685  291722 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-663015 --alsologtostderr -v=1 failed: exit status 80
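Every retry above dies on the same error: sudo runc list -f json exits 1 with "open /run/runc: no such file or directory", i.e. runc's default state directory is absent on this crio node, so pause can never enumerate running containers. A diagnostic sketch to run on the node (e.g. via minikube ssh); it assumes runc's global --root flag and crictl are installed there, and the candidate root paths are illustrative guesses, not taken from minikube or crio config:

	// runc_root_probe.go - probe which runtime state dir actually exists,
	// then fall back to crictl, which asks crio over its socket instead of
	// reading runc state from disk.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Illustrative candidates: runc's default root, then crun's.
		for _, root := range []string{"/run/runc", "/run/crun"} {
			out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
			fmt.Printf("runc --root %s: err=%v out=%.60s\n", root, err, out)
		}
		// crictl works regardless of the runtime root because it queries crio directly.
		out, err := exec.Command("sudo", "crictl", "ps", "--quiet", "--state", "Running").CombinedOutput()
		fmt.Printf("crictl ps: err=%v\n%s", err, out)
	}

That crio-socket path is also why the stderr above lists the same eleven container IDs via crictl on every attempt while runc list fails each time.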
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-663015
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-663015:
-- stdout --
	[
	    {
	        "Id": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	        "Created": "2025-10-19T17:15:37.665155013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:38.526270344Z",
	            "FinishedAt": "2025-10-19T17:16:37.654048946Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hostname",
	        "HostsPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hosts",
	        "LogPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1-json.log",
	        "Name": "/default-k8s-diff-port-663015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-663015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-663015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	                "LowerDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-663015",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-663015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-663015",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed3f81f658cbe16f536c1ac747442ea27efab1429f3c5fcfd91d96e16704b896",
	            "SandboxKey": "/var/run/docker/netns/ed3f81f658cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-663015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:fb:99:d9:0d:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11e31399831af0dccd7c897515d1d7c4e22e31f4e5da333490f417dfbabfda44",
	                    "EndpointID": "c99cd5f30981722c9a472b4b321225fdbfa23a3fc45505513f2c2cf11450bd38",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-663015",
	                        "8abacb4fd440"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
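In the inspect output above, HostConfig.PortBindings requests an empty HostPort for each guest port, while NetworkSettings.Ports shows the concrete values Docker assigned at start (33099-33103): minikube publishes every port to an ephemeral host port and resolves it afterwards. A small sketch of that resolution, using the same Go template the pause command ran earlier in this log (container name taken from this report; this is not minikube's own helper):

	// host_port_lookup.go - resolve the ephemeral host port mapped to a
	// container port via `docker container inspect -f <template>`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, port string) (string, error) {
		// %q quotes the port key so the template reads: index .NetworkSettings.Ports "22/tcp"
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("default-k8s-diff-port-663015", "22/tcp")
		fmt.Println(p, err) // expected "33099" per NetworkSettings.Ports above
	}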
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015: exit status 2 (319.403024ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25: (1.481660167s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-624324               │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ image   │ embed-certs-090139 image list --format=json                                                                                                                                                                                                   │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p embed-certs-090139 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ delete  │ -p embed-certs-090139                                                                                                                                                                                                                         │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p embed-certs-090139                                                                                                                                                                                                                         │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p calico-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-624324                │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ ssh     │ -p auto-624324 pgrep -a kubelet                                                                                                                                                                                                               │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ image   │ default-k8s-diff-port-663015 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p default-k8s-diff-port-663015 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:17:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:17:30.185260  289639 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:30.185541  289639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:30.185551  289639 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:30.185557  289639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:30.185792  289639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:30.186331  289639 out.go:368] Setting JSON to false
	I1019 17:17:30.187545  289639 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3596,"bootTime":1760890654,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:17:30.187642  289639 start.go:143] virtualization: kvm guest
	I1019 17:17:30.189871  289639 out.go:179] * [calico-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:17:30.191302  289639 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:17:30.191334  289639 notify.go:221] Checking for updates...
	I1019 17:17:30.194160  289639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:17:30.195367  289639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:30.196824  289639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:17:30.197996  289639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:17:30.199151  289639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:17:30.200539  284195 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:17:30.200620  284195 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:17:30.200740  284195 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:17:30.200815  284195 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:17:30.200888  284195 kubeadm.go:319] OS: Linux
	I1019 17:17:30.200991  284195 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:17:30.201097  284195 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:17:30.201179  284195 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:17:30.201247  284195 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:17:30.201426  284195 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:17:30.201499  284195 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:17:30.201563  284195 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:17:30.201632  284195 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:17:30.201738  284195 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:17:30.201891  284195 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:17:30.202039  284195 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:17:30.202150  284195 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:17:30.203504  284195 out.go:252]   - Generating certificates and keys ...
	I1019 17:17:30.203597  284195 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:17:30.203710  284195 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:17:30.203830  284195 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:17:30.203918  284195 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:17:30.204036  284195 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:17:30.204153  284195 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:17:30.204235  284195 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:17:30.204375  284195 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-624324 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 17:17:30.204473  284195 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:17:30.204649  284195 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-624324 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 17:17:30.204713  284195 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:17:30.204766  284195 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:17:30.204803  284195 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:17:30.204877  284195 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:17:30.204963  284195 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:17:30.205050  284195 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:17:30.205155  284195 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:17:30.205236  284195 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:17:30.205287  284195 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:17:30.205379  284195 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:17:30.205454  284195 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:17:30.206741  284195 out.go:252]   - Booting up control plane ...
	I1019 17:17:30.206821  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:17:30.206886  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:17:30.206953  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:17:30.207085  284195 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:17:30.207209  284195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:17:30.207368  284195 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:17:30.207493  284195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:17:30.207559  284195 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:17:30.207737  284195 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:17:30.207902  284195 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:17:30.207985  284195 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001588575s
	I1019 17:17:30.208147  284195 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:17:30.208269  284195 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 17:17:30.208401  284195 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:17:30.208471  284195 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:17:30.208567  284195 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.88154165s
	I1019 17:17:30.208662  284195 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.48464111s
	I1019 17:17:30.208776  284195 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502087654s
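
The three control-plane health endpoints polled above are ordinary HTTP(S) endpoints and can be probed by hand from inside the node when a check hangs. A minimal sketch, assuming the same addresses logged above and that curl is present (-k skips verification of the cluster-CA-signed serving certificates):

	curl -k https://192.168.94.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager (localhost only)
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler (localhost only)
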
	I1019 17:17:30.208893  284195 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:17:30.209059  284195 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:17:30.209144  284195 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:17:30.209355  284195 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-624324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:17:30.209434  284195 kubeadm.go:319] [bootstrap-token] Using token: 6l0bh9.d4pxjapp0nmt5wyg
	I1019 17:17:30.210995  284195 out.go:252]   - Configuring RBAC rules ...
	I1019 17:17:30.211179  284195 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:17:30.211302  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:17:30.211491  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:17:30.211676  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:17:30.211844  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:17:30.211981  284195 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:17:30.212174  284195 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:17:30.212236  284195 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:17:30.212305  284195 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:17:30.212314  284195 kubeadm.go:319] 
	I1019 17:17:30.212425  284195 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:17:30.212440  284195 kubeadm.go:319] 
	I1019 17:17:30.212553  284195 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:17:30.212562  284195 kubeadm.go:319] 
	I1019 17:17:30.212595  284195 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:17:30.212647  284195 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:17:30.212689  284195 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:17:30.212694  284195 kubeadm.go:319] 
	I1019 17:17:30.212736  284195 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:17:30.212744  284195 kubeadm.go:319] 
	I1019 17:17:30.212789  284195 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:17:30.212795  284195 kubeadm.go:319] 
	I1019 17:17:30.212846  284195 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:17:30.212936  284195 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:17:30.213014  284195 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:17:30.213022  284195 kubeadm.go:319] 
	I1019 17:17:30.213152  284195 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:17:30.213264  284195 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:17:30.213274  284195 kubeadm.go:319] 
	I1019 17:17:30.213424  284195 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6l0bh9.d4pxjapp0nmt5wyg \
	I1019 17:17:30.213561  284195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:17:30.213591  284195 kubeadm.go:319] 	--control-plane 
	I1019 17:17:30.213601  284195 kubeadm.go:319] 
	I1019 17:17:30.213710  284195 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:17:30.213720  284195 kubeadm.go:319] 
	I1019 17:17:30.213792  284195 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6l0bh9.d4pxjapp0nmt5wyg \
	I1019 17:17:30.213915  284195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
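
The bootstrap token 6l0bh9.d4pxjapp0nmt5wyg shown in the join commands above is short-lived; if it expires before a node joins, a fresh join command can be generated on the control plane with stock kubeadm:

	kubeadm token create --print-join-command
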
	I1019 17:17:30.213931  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:30.215435  284195 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:17:30.200858  289639 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201032  289639 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201190  289639 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201310  289639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:17:30.227986  289639 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:17:30.228102  289639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:30.294676  289639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:17:30.282465244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:30.294835  289639 docker.go:319] overlay module found
	I1019 17:17:30.296981  289639 out.go:179] * Using the docker driver based on user configuration
	I1019 17:17:30.298470  289639 start.go:309] selected driver: docker
	I1019 17:17:30.298482  289639 start.go:930] validating driver "docker" against <nil>
	I1019 17:17:30.298493  289639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:17:30.299116  289639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:30.365861  289639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:17:30.35470659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:30.366124  289639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:17:30.366384  289639 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:30.369571  289639 out.go:179] * Using Docker driver with root privileges
	I1019 17:17:30.371091  289639 cni.go:84] Creating CNI manager for "calico"
	I1019 17:17:30.371118  289639 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1019 17:17:30.371208  289639 start.go:353] cluster config:
	{Name:calico-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:30.372839  289639 out.go:179] * Starting "calico-624324" primary control-plane node in "calico-624324" cluster
	I1019 17:17:30.374294  289639 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:17:30.376419  289639 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:17:30.378319  289639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:30.378374  289639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:17:30.378385  289639 cache.go:59] Caching tarball of preloaded images
	I1019 17:17:30.378420  289639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:17:30.378507  289639 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:17:30.378527  289639 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:17:30.378631  289639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/calico-624324/config.json ...
	I1019 17:17:30.378658  289639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/calico-624324/config.json: {Name:mkc1d18576fa2e902d7f1848da48391372f0709f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:30.402467  289639 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:17:30.402487  289639 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:17:30.402504  289639 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:17:30.402534  289639 start.go:360] acquireMachinesLock for calico-624324: {Name:mk2c98cc9b235a303919b952cb56e2eb1222327c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:30.402654  289639 start.go:364] duration metric: took 100.062µs to acquireMachinesLock for "calico-624324"
	I1019 17:17:30.402688  289639 start.go:93] Provisioning new machine with config: &{Name:calico-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:30.402774  289639 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:17:27.947909  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	W1019 17:17:29.948453  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	W1019 17:17:32.448468  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	I1019 17:17:30.216943  284195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:17:30.222173  284195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:17:30.222193  284195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:17:30.237562  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
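
After the kindnet manifest is applied, the CNI pods should come up in kube-system, one per node. A quick check, assuming the pods carry the kindnet- name prefix seen elsewhere in this report (e.g. kindnet-sn8ll):

	kubectl -n kube-system get pods -o wide | grep kindnet
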
	I1019 17:17:30.490427  284195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:17:30.490546  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:30.490597  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-624324 minikube.k8s.io/updated_at=2025_10_19T17_17_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=kindnet-624324 minikube.k8s.io/primary=true
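
Both post-init mutations above (the cluster-admin binding for kube-system:default and the minikube.k8s.io/* node labels) are plain kubectl operations and can be read back the same way; for example:

	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl get node kindnet-624324 --show-labels
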
	I1019 17:17:30.590937  284195 ops.go:34] apiserver oom_adj: -16
	I1019 17:17:30.591031  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:31.091534  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:31.592110  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:32.091793  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:32.591892  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:30.408239  289639 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:17:30.408537  289639 start.go:159] libmachine.API.Create for "calico-624324" (driver="docker")
	I1019 17:17:30.408578  289639 client.go:171] LocalClient.Create starting
	I1019 17:17:30.408655  289639 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:17:30.408711  289639 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:30.408745  289639 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:30.408833  289639 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:17:30.408867  289639 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:30.408883  289639 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:30.409393  289639 cli_runner.go:164] Run: docker network inspect calico-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:17:30.430134  289639 cli_runner.go:211] docker network inspect calico-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:17:30.430246  289639 network_create.go:284] running [docker network inspect calico-624324] to gather additional debugging logs...
	I1019 17:17:30.430269  289639 cli_runner.go:164] Run: docker network inspect calico-624324
	W1019 17:17:30.451573  289639 cli_runner.go:211] docker network inspect calico-624324 returned with exit code 1
	I1019 17:17:30.451634  289639 network_create.go:287] error running [docker network inspect calico-624324]: docker network inspect calico-624324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-624324 not found
	I1019 17:17:30.451653  289639 network_create.go:289] output of [docker network inspect calico-624324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-624324 not found
	
	** /stderr **
	I1019 17:17:30.451866  289639 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:30.472830  289639 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:17:30.473906  289639 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:17:30.474899  289639 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:17:30.475677  289639 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a9c8e7e3ba20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:77:c0:aa:7f:5e} reservation:<nil>}
	I1019 17:17:30.476341  289639 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11e31399831a IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:85:d0:14:cb:57} reservation:<nil>}
	I1019 17:17:30.477003  289639 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a3eeeb5b1108 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:06:da:04:df:0e:fc} reservation:<nil>}
	I1019 17:17:30.477817  289639 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f64ce0}
	I1019 17:17:30.477842  289639 network_create.go:124] attempt to create docker network calico-624324 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1019 17:17:30.477889  289639 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-624324 calico-624324
	I1019 17:17:30.555614  289639 network_create.go:108] docker network calico-624324 192.168.103.0/24 created
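
The subnet and gateway chosen for the new bridge network can be confirmed with a plain inspect; for example:

	docker network inspect calico-624324 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
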
	I1019 17:17:30.555651  289639 kic.go:121] calculated static IP "192.168.103.2" for the "calico-624324" container
	I1019 17:17:30.555809  289639 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:17:30.577790  289639 cli_runner.go:164] Run: docker volume create calico-624324 --label name.minikube.sigs.k8s.io=calico-624324 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:17:30.601230  289639 oci.go:103] Successfully created a docker volume calico-624324
	I1019 17:17:30.601299  289639 cli_runner.go:164] Run: docker run --rm --name calico-624324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-624324 --entrypoint /usr/bin/test -v calico-624324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:17:31.008663  289639 oci.go:107] Successfully prepared a docker volume calico-624324
	I1019 17:17:31.008716  289639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:31.008741  289639 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:17:31.008790  289639 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
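
The extraction can be spot-checked by mounting the volume into a throwaway container, mirroring the -v mount used above (any small utility image such as busybox works):

	docker run --rm -v calico-624324:/extractDir busybox ls /extractDir
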
	I1019 17:17:33.091606  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:33.591188  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:34.091214  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:34.592036  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.091193  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.591965  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.688361  284195 kubeadm.go:1114] duration metric: took 5.197878626s to wait for elevateKubeSystemPrivileges
	I1019 17:17:35.688391  284195 kubeadm.go:403] duration metric: took 16.642563618s to StartCluster
	I1019 17:17:35.688408  284195 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:35.688469  284195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:35.689712  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:35.689929  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:17:35.689952  284195 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:17:35.689925  284195 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:35.690044  284195 addons.go:70] Setting storage-provisioner=true in profile "kindnet-624324"
	I1019 17:17:35.690060  284195 addons.go:239] Setting addon storage-provisioner=true in "kindnet-624324"
	I1019 17:17:35.690102  284195 host.go:66] Checking if "kindnet-624324" exists ...
	I1019 17:17:35.690110  284195 addons.go:70] Setting default-storageclass=true in profile "kindnet-624324"
	I1019 17:17:35.690129  284195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-624324"
	I1019 17:17:35.690143  284195 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:35.690450  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.690583  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.691861  284195 out.go:179] * Verifying Kubernetes components...
	I1019 17:17:35.695143  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:35.720116  284195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:17:35.721460  284195 addons.go:239] Setting addon default-storageclass=true in "kindnet-624324"
	I1019 17:17:35.721498  284195 host.go:66] Checking if "kindnet-624324" exists ...
	I1019 17:17:35.721743  284195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:35.721783  284195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:17:35.721842  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:35.721973  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.755031  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:35.756465  284195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:35.756504  284195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:17:35.756709  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:35.784975  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
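
The ssh client parameters logged above (key path, forwarded port, docker user) map directly onto a manual session, which is handy when a provisioning step has to be reproduced by hand:

	ssh -o StrictHostKeyChecking=no -p 33109 \
	  -i /home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa \
	  docker@127.0.0.1
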
	I1019 17:17:35.816556  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:17:35.872892  284195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:17:35.915310  284195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:35.933958  284195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:36.050912  284195 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
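
The sed pipeline above splices a hosts block (plus a log directive) into the Corefile ahead of the forward plugin; the injected record can be verified directly from the ConfigMap:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain: 192.168.94.1 host.minikube.internal
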
	I1019 17:17:36.054196  284195 node_ready.go:35] waiting up to 15m0s for node "kindnet-624324" to be "Ready" ...
	I1019 17:17:36.319840  284195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1019 17:17:34.537344  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	I1019 17:17:35.084309  279986 node_ready.go:49] node "auto-624324" is "Ready"
	I1019 17:17:35.084388  279986 node_ready.go:38] duration metric: took 11.139734674s for node "auto-624324" to be "Ready" ...
	I1019 17:17:35.084409  279986 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:17:35.084476  279986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:17:35.101369  279986 api_server.go:72] duration metric: took 11.73260785s to wait for apiserver process to appear ...
	I1019 17:17:35.101391  279986 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:17:35.101413  279986 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:17:35.106137  279986 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
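
The same probe can be reproduced outside the test harness; -k is required because the serving certificate is signed by the minikube CA rather than a public one:

	curl -k https://192.168.76.2:8443/healthz   # prints: ok
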
	I1019 17:17:35.107271  279986 api_server.go:141] control plane version: v1.34.1
	I1019 17:17:35.107294  279986 api_server.go:131] duration metric: took 5.897803ms to wait for apiserver health ...
	I1019 17:17:35.107304  279986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:17:35.195018  279986 system_pods.go:59] 8 kube-system pods found
	I1019 17:17:35.195089  279986 system_pods.go:61] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.195110  279986 system_pods.go:61] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.195121  279986 system_pods.go:61] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.195133  279986 system_pods.go:61] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.195142  279986 system_pods.go:61] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.195148  279986 system_pods.go:61] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.195156  279986 system_pods.go:61] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.195164  279986 system_pods.go:61] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.195179  279986 system_pods.go:74] duration metric: took 87.868832ms to wait for pod list to return data ...
	I1019 17:17:35.195199  279986 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:17:35.199139  279986 default_sa.go:45] found service account: "default"
	I1019 17:17:35.199168  279986 default_sa.go:55] duration metric: took 3.962279ms for default service account to be created ...
	I1019 17:17:35.199181  279986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:17:35.202043  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.202090  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.202099  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.202107  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.202112  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.202118  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.202123  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.202128  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.202136  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.202162  279986 retry.go:31] will retry after 285.532284ms: missing components: kube-dns
	I1019 17:17:35.498758  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.498811  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.498819  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.498827  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.498832  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.498839  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.498844  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.498849  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.498856  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.498874  279986 retry.go:31] will retry after 362.776811ms: missing components: kube-dns
	I1019 17:17:35.867039  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.867188  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Running
	I1019 17:17:35.867201  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.867223  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.867232  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.867240  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.867251  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.867258  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.867268  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Running
	I1019 17:17:35.867279  279986 system_pods.go:126] duration metric: took 668.091061ms to wait for k8s-apps to be running ...
	I1019 17:17:35.867330  279986 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:17:35.867387  279986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:35.888132  279986 system_svc.go:56] duration metric: took 20.734531ms WaitForService to wait for kubelet
	I1019 17:17:35.888168  279986 kubeadm.go:587] duration metric: took 12.519409433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:35.888201  279986 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:17:35.894437  279986 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:17:35.894471  279986 node_conditions.go:123] node cpu capacity is 8
	I1019 17:17:35.894486  279986 node_conditions.go:105] duration metric: took 6.277959ms to run NodePressure ...
	I1019 17:17:35.894501  279986 start.go:242] waiting for startup goroutines ...
	I1019 17:17:35.894512  279986 start.go:247] waiting for cluster config update ...
	I1019 17:17:35.894530  279986 start.go:256] writing updated cluster config ...
	I1019 17:17:35.894852  279986 ssh_runner.go:195] Run: rm -f paused
	I1019 17:17:35.905871  279986 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:35.967887  279986 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5mktl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.974949  279986 pod_ready.go:94] pod "coredns-66bc5c9577-5mktl" is "Ready"
	I1019 17:17:35.974992  279986 pod_ready.go:86] duration metric: took 7.06813ms for pod "coredns-66bc5c9577-5mktl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.977774  279986 pod_ready.go:83] waiting for pod "etcd-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.982121  279986 pod_ready.go:94] pod "etcd-auto-624324" is "Ready"
	I1019 17:17:35.982190  279986 pod_ready.go:86] duration metric: took 4.395149ms for pod "etcd-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.984377  279986 pod_ready.go:83] waiting for pod "kube-apiserver-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.989387  279986 pod_ready.go:94] pod "kube-apiserver-auto-624324" is "Ready"
	I1019 17:17:35.989409  279986 pod_ready.go:86] duration metric: took 4.956684ms for pod "kube-apiserver-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.991486  279986 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.312207  279986 pod_ready.go:94] pod "kube-controller-manager-auto-624324" is "Ready"
	I1019 17:17:36.312239  279986 pod_ready.go:86] duration metric: took 320.727534ms for pod "kube-controller-manager-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.511302  279986 pod_ready.go:83] waiting for pod "kube-proxy-84x4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.910102  279986 pod_ready.go:94] pod "kube-proxy-84x4j" is "Ready"
	I1019 17:17:36.910127  279986 pod_ready.go:86] duration metric: took 398.801949ms for pod "kube-proxy-84x4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.111777  279986 pod_ready.go:83] waiting for pod "kube-scheduler-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.510882  279986 pod_ready.go:94] pod "kube-scheduler-auto-624324" is "Ready"
	I1019 17:17:37.510913  279986 pod_ready.go:86] duration metric: took 399.110949ms for pod "kube-scheduler-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.510932  279986 pod_ready.go:40] duration metric: took 1.605021737s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:37.557619  279986 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:17:37.560255  279986 out.go:179] * Done! kubectl is now configured to use "auto-624324" cluster and "default" namespace by default
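
Since minikube names the kubeconfig context after the profile, the finished cluster can be exercised explicitly by context even while the other profiles in this run are still starting:

	kubectl --context auto-624324 get nodes
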
	I1019 17:17:36.320995  284195 addons.go:515] duration metric: took 631.042942ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:17:36.556029  284195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-624324" context rescaled to 1 replicas
	I1019 17:17:35.523612  289639 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.514758827s)
	I1019 17:17:35.523650  289639 kic.go:203] duration metric: took 4.514904591s to extract preloaded images to volume ...
	W1019 17:17:35.523800  289639 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:17:35.523841  289639 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:17:35.523895  289639 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:17:35.625654  289639 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-624324 --name calico-624324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-624324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-624324 --network calico-624324 --ip 192.168.103.2 --volume calico-624324:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
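
Each --publish=127.0.0.1:: flag above binds a random host port; the resulting mappings (including the SSH port 33114 that appears further down) can be listed afterwards:

	docker port calico-624324
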
	I1019 17:17:36.033377  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Running}}
	I1019 17:17:36.060470  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.086184  289639 cli_runner.go:164] Run: docker exec calico-624324 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:17:36.142694  289639 oci.go:144] the created container "calico-624324" has a running status.
	I1019 17:17:36.142727  289639 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa...
	I1019 17:17:36.226603  289639 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:17:36.255542  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.279260  289639 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:17:36.279289  289639 kic_runner.go:114] Args: [docker exec --privileged calico-624324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:17:36.338266  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.357462  289639 machine.go:94] provisionDockerMachine start ...
	I1019 17:17:36.357558  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:36.378591  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:36.378871  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:36.378888  289639 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:17:36.379632  289639 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34746->127.0.0.1:33114: read: connection reset by peer
	I1019 17:17:39.526597  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-624324
	
	I1019 17:17:39.526623  289639 ubuntu.go:182] provisioning hostname "calico-624324"
	I1019 17:17:39.526695  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:39.548785  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:39.549103  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:39.549123  289639 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-624324 && echo "calico-624324" | sudo tee /etc/hostname
	I1019 17:17:39.707759  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-624324
	
	I1019 17:17:39.707844  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:39.731441  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:39.731756  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:39.731781  289639 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-624324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-624324/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-624324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:17:39.877027  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: 
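
The guarded snippet above only rewrites the 127.0.1.1 entry when no calico-624324 line exists yet, so it is idempotent across re-provisioning; the result can be checked from the host:

	docker exec calico-624324 grep calico-624324 /etc/hosts
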
	I1019 17:17:39.877086  289639 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:17:39.877127  289639 ubuntu.go:190] setting up certificates
	I1019 17:17:39.877151  289639 provision.go:84] configureAuth start
	I1019 17:17:39.877226  289639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-624324
	I1019 17:17:39.898226  289639 provision.go:143] copyHostCerts
	I1019 17:17:39.898290  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:17:39.898302  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:17:39.898375  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:17:39.898497  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:17:39.898509  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:17:39.898554  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:17:39.898660  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:17:39.898673  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:17:39.898718  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:17:39.898808  289639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.calico-624324 san=[127.0.0.1 192.168.103.2 calico-624324 localhost minikube]
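
The SAN list requested above (loopback, node IP, hostname aliases) can be read back from the generated server certificate with stock openssl:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
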
	
	
	==> CRI-O <==
	Oct 19 17:17:03 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:03.367810595Z" level=info msg="Started container" PID=1713 containerID=18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper id=f04e35b7-c21a-4434-9681-c85bf9715924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6af35ef9d2a2159e56fddc5247be1a66d40c981bd05b4f663689615210175014
	Oct 19 17:17:04 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:04.329452873Z" level=info msg="Removing container: 80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77" id=88420c50-0d07-4321-b8f2-9d3946fe92ab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:04 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:04.339468195Z" level=info msg="Removed container 80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=88420c50-0d07-4321-b8f2-9d3946fe92ab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.374922553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2c74a611-197b-4a70-be2a-7fe30bdf1e62 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.376105319Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d68b9017-011b-4943-85a7-d3e1b56ec779 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.377577433Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95de63c1-55ea-4e74-8628-692bab90b918 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.377847054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382437547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382849775Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a16bd65f5a14248e967fc52c24b65d730a21da85271a68c2aa835878beca85cd/merged/etc/passwd: no such file or directory"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382879286Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a16bd65f5a14248e967fc52c24b65d730a21da85271a68c2aa835878beca85cd/merged/etc/group: no such file or directory"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.383313416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.414736647Z" level=info msg="Created container 4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d: kube-system/storage-provisioner/storage-provisioner" id=95de63c1-55ea-4e74-8628-692bab90b918 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.415721707Z" level=info msg="Starting container: 4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d" id=25d03c06-65fe-441c-8c92-50b42ab21dbb name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.418189609Z" level=info msg="Started container" PID=1727 containerID=4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d description=kube-system/storage-provisioner/storage-provisioner id=25d03c06-65fe-441c-8c92-50b42ab21dbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=2901d7666db5f343407709b25c0673c122903f1b8623ee9e685a5121d48921f5
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.225547156Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bf653993-975b-453d-ae37-bdb394a7f960 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.22650212Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=578b51f5-970a-4a8e-a856-52df2c14a5f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.227593849Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=4d6491c8-8433-46f3-86e1-268b0e9c967d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.227809472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.233437488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.234138992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.263734099Z" level=info msg="Created container 90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=4d6491c8-8433-46f3-86e1-268b0e9c967d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.264439498Z" level=info msg="Starting container: 90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502" id=24da0f3a-e842-4daf-8794-19a8433b9ef6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.266603348Z" level=info msg="Started container" PID=1741 containerID=90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper id=24da0f3a-e842-4daf-8794-19a8433b9ef6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6af35ef9d2a2159e56fddc5247be1a66d40c981bd05b4f663689615210175014
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.394729515Z" level=info msg="Removing container: 18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e" id=75fc2e8b-4278-478c-b311-8e4dbec278c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.408191429Z" level=info msg="Removed container 18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=75fc2e8b-4278-478c-b311-8e4dbec278c0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	90266bf26f9f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   6af35ef9d2a21       dashboard-metrics-scraper-6ffb444bf9-wz2k5             kubernetes-dashboard
	4bc7eb843c662       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   2901d7666db5f       storage-provisioner                                    kube-system
	955e744fa1a45       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   065342cb78138       kubernetes-dashboard-855c9754f9-kr5fp                  kubernetes-dashboard
	2429e7f2aaaa2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   48b9efd29f126       busybox                                                default
	5549fb115a4d8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   7bba9b91268ce       coredns-66bc5c9577-2r8tf                               kube-system
	90c70b291b7dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   2901d7666db5f       storage-provisioner                                    kube-system
	cf5b36f1b4008       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   f7787aa1db132       kube-proxy-g62dn                                       kube-system
	15343a83908f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   64e6f770e0171       kindnet-rrthg                                          kube-system
	6f5702f98db02       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   9fb8b70fb0e4a       kube-apiserver-default-k8s-diff-port-663015            kube-system
	0198767b0edb6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   9ecb9760ac032       kube-scheduler-default-k8s-diff-port-663015            kube-system
	98c9671492774       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   59c38f3fffda0       kube-controller-manager-default-k8s-diff-port-663015   kube-system
	79c3046dfcac2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   f720bde5c5e57       etcd-default-k8s-diff-port-663015                      kube-system
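
This table is CRI-O's own view of the node. While the profile is still up, the same listing can be reproduced over minikube's SSH wrapper (a sketch; crictl ps -a includes exited containers such as the crash-looping scraper):

    minikube -p default-k8s-diff-port-663015 ssh -- sudo crictl ps -a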
	
	
	==> coredns [5549fb115a4d8128a67647e53adefbd5f1396f4eca49ed1c46a0a85127887340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48677 - 18927 "HINFO IN 1165854263028167530.7393025748261522906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06438571s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
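
The dial tcp 10.96.0.1:443: i/o timeout entries show CoreDNS briefly unable to reach the kubernetes Service VIP right after the restart, the same symptom that kills the first storage-provisioner further down. A sketch for pulling these logs directly, assuming minikube's usual k8s-app=kube-dns label on the CoreDNS pods:

    kubectl --context default-k8s-diff-port-663015 -n kube-system \
      logs -l k8s-app=kube-dns --tail=50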
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-663015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-663015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-663015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-663015
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-663015
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                e7d4d908-64b0-4858-bf62-c6148a998433
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-2r8tf                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-663015                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-rrthg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-663015             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-663015    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-g62dn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-663015             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wz2k5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kr5fp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-663015 event: Registered Node default-k8s-diff-port-663015 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-663015 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-663015 event: Registered Node default-k8s-diff-port-663015 in Controller
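
This block is ordinary kubectl describe node output and can be regenerated against the same profile at any time:

    kubectl --context default-k8s-diff-port-663015 \
      describe node default-k8s-diff-port-663015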
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
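
The martian source entries are the kernel flagging packets that arrive on eth0 claiming a 127.0.0.1 source; their Oct19 16:2x timestamps predate this test, so they look like leftover noise from an earlier run sharing the host kernel. To isolate them (a sketch; the ring buffer is host-wide, so unrelated entries may appear):

    minikube -p default-k8s-diff-port-663015 ssh -- sudo dmesg | grep -A1 martian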
	
	
	==> etcd [79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854] <==
	{"level":"warn","ts":"2025-10-19T17:16:48.059221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.074854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.082853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.092009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.100646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.109542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.116472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.124954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.133429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.141663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.149163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.157292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.164890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.172527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.187004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.194575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.202248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.282974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33834","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:57.501369Z","caller":"traceutil/trace.go:172","msg":"trace[139943372] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"126.124889ms","start":"2025-10-19T17:16:57.375222Z","end":"2025-10-19T17:16:57.501347Z","steps":["trace[139943372] 'process raft request'  (duration: 125.997011ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.042403Z","caller":"traceutil/trace.go:172","msg":"trace[1399571219] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"125.448371ms","start":"2025-10-19T17:16:58.916918Z","end":"2025-10-19T17:16:59.042367Z","steps":["trace[1399571219] 'process raft request'  (duration: 125.336826ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.502012Z","caller":"traceutil/trace.go:172","msg":"trace[1774317239] linearizableReadLoop","detail":"{readStateIndex:593; appliedIndex:593; }","duration":"131.644674ms","start":"2025-10-19T17:16:59.370341Z","end":"2025-10-19T17:16:59.501986Z","steps":["trace[1774317239] 'read index received'  (duration: 131.634571ms)","trace[1774317239] 'applied index is now lower than readState.Index'  (duration: 8.784µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:16:59.542446Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.048651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T17:16:59.542541Z","caller":"traceutil/trace.go:172","msg":"trace[1086183679] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:565; }","duration":"172.192772ms","start":"2025-10-19T17:16:59.370331Z","end":"2025-10-19T17:16:59.542524Z","steps":["trace[1086183679] 'agreement among raft nodes before linearized reading'  (duration: 131.748702ms)","trace[1086183679] 'range keys from in-memory index tree'  (duration: 40.264735ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:16:59.542626Z","caller":"traceutil/trace.go:172","msg":"trace[601624699] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"146.756806ms","start":"2025-10-19T17:16:59.395857Z","end":"2025-10-19T17:16:59.542613Z","steps":["trace[601624699] 'process raft request'  (duration: 146.707103ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.542783Z","caller":"traceutil/trace.go:172","msg":"trace[282571166] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"251.26124ms","start":"2025-10-19T17:16:59.291503Z","end":"2025-10-19T17:16:59.542764Z","steps":["trace[282571166] 'process raft request'  (duration: 210.542951ms)","trace[282571166] 'compare'  (duration: 40.364474ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:17:42 up  1:00,  0 user,  load average: 5.37, 3.64, 2.15
	Linux default-k8s-diff-port-663015 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [15343a83908f85231015b0d8768253b6b0aae7ec917d83ac88ef6e5b58711ebc] <==
	I1019 17:16:49.844752       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:49.845031       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:16:49.845218       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:49.845249       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:49.845274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:50.045316       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:50.045354       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:50.045378       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:50.139252       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:50.589041       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:50.589096       1 metrics.go:72] Registering metrics
	I1019 17:16:50.589188       1 controller.go:711] "Syncing nftables rules"
	I1019 17:17:00.047164       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:00.047311       1 main.go:301] handling current node
	I1019 17:17:10.048727       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:10.048766       1 main.go:301] handling current node
	I1019 17:17:20.045239       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:20.045297       1 main.go:301] handling current node
	I1019 17:17:30.048140       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:30.048180       1 main.go:301] handling current node
	I1019 17:17:40.052149       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:40.052190       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238] <==
	I1019 17:16:48.817861       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:48.818212       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:48.819881       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:16:48.820081       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:16:48.820215       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:48.820256       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:48.820282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:48.820291       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:48.826138       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:48.838170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:48.847099       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:48.847138       1 policy_source.go:240] refreshing policies
	I1019 17:16:48.870775       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:49.160747       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:49.189719       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:49.210944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:49.220996       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:49.235229       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:49.306346       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.246.23"}
	I1019 17:16:49.322180       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.157.229"}
	I1019 17:16:49.720717       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:52.395445       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:52.546083       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:52.546100       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:52.596659       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1] <==
	I1019 17:16:52.112058       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:16:52.112134       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:16:52.112141       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:52.112148       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:52.113248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:16:52.115441       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:16:52.116906       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:52.142300       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:52.142395       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:16:52.142423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:16:52.142477       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:16:52.142496       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:52.142371       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:16:52.142384       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:16:52.142364       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:16:52.142337       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:16:52.143445       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:16:52.152343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:52.157465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:52.159227       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:16:52.164471       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:16:52.177426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:52.192936       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:52.193058       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:52.193093       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cf5b36f1b400873a6f64ccde1cbf959c0adf80a4cbee8a27050d0adb93e938aa] <==
	I1019 17:16:49.664926       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:49.723283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:49.823951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:49.824111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:16:49.824242       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:49.849019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:49.849120       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:49.855743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:49.856323       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:49.856455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:49.859457       1 config.go:200] "Starting service config controller"
	I1019 17:16:49.859481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:49.859502       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:49.859507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:49.859523       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:49.859528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:49.859923       1 config.go:309] "Starting node config controller"
	I1019 17:16:49.859961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:49.959756       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:49.959769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:16:49.959773       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:49.960319       1 shared_informer.go:356] "Caches are synced" controller="node config"
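
kube-proxy settled on the iptables proxier, so every ClusterIP, including the 10.96.0.1 VIP that CoreDNS and the first storage-provisioner timed out against, should be materialized in the KUBE-SERVICES NAT chain. A sketch for inspecting it on the node:

    minikube -p default-k8s-diff-port-663015 ssh -- \
      sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20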
	
	
	==> kube-scheduler [0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b] <==
	I1019 17:16:47.210525       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:16:48.752862       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:16:48.753147       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:16:48.753165       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:16:48.753176       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:16:48.813728       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:48.813758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:48.816801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:48.816846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:48.818811       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:48.818906       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:48.917270       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874565     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/caaa2764-cc2e-4a6c-a8b3-45bb63d04684-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kr5fp\" (UID: \"caaa2764-cc2e-4a6c-a8b3-45bb63d04684\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874625     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlws\" (UniqueName: \"kubernetes.io/projected/3e29db7a-65b5-4974-b566-184d80eaa717-kube-api-access-znlws\") pod \"dashboard-metrics-scraper-6ffb444bf9-wz2k5\" (UID: \"3e29db7a-65b5-4974-b566-184d80eaa717\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874643     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3e29db7a-65b5-4974-b566-184d80eaa717-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wz2k5\" (UID: \"3e29db7a-65b5-4974-b566-184d80eaa717\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874665     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lhv\" (UniqueName: \"kubernetes.io/projected/caaa2764-cc2e-4a6c-a8b3-45bb63d04684-kube-api-access-h9lhv\") pod \"kubernetes-dashboard-855c9754f9-kr5fp\" (UID: \"caaa2764-cc2e-4a6c-a8b3-45bb63d04684\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp"
	Oct 19 17:16:55 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:55.582962     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:17:01 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:01.590009     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp" podStartSLOduration=3.786112179 podStartE2EDuration="9.589986972s" podCreationTimestamp="2025-10-19 17:16:52 +0000 UTC" firstStartedPulling="2025-10-19 17:16:53.110184825 +0000 UTC m=+6.976881850" lastFinishedPulling="2025-10-19 17:16:58.914059613 +0000 UTC m=+12.780756643" observedRunningTime="2025-10-19 17:17:00.330286018 +0000 UTC m=+14.196983073" watchObservedRunningTime="2025-10-19 17:17:01.589986972 +0000 UTC m=+15.456684009"
	Oct 19 17:17:03 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:03.322761     711 scope.go:117] "RemoveContainer" containerID="80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:04.328060     711 scope.go:117] "RemoveContainer" containerID="80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:04.328250     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:04.328467     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:05 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:05.332896     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:05 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:05.333147     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:12 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:12.294763     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:12 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:12.294974     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:20 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:20.374408     711 scope.go:117] "RemoveContainer" containerID="90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.225030     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.392707     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.393187     711 scope.go:117] "RemoveContainer" containerID="90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:25.393868     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:32 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:32.295388     711 scope.go:117] "RemoveContainer" containerID="90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	Oct 19 17:17:32 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:32.295595     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: kubelet.service: Consumed 1.788s CPU time.
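
Two things stand out here: dashboard-metrics-scraper is in CrashLoopBackOff (its exit reason would live in the previous container instance, retrievable as sketched below), and the trailing systemd lines show kubelet being stopped at 17:17:39, which is the pause operation under test rather than a crash.

    kubectl --context default-k8s-diff-port-663015 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-6ffb444bf9-wz2k5 --previous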
	
	
	==> kubernetes-dashboard [955e744fa1a4514fad91045a63abd0edc7a8e64dcf7069fcb10271b34fac88fe] <==
	2025/10/19 17:16:59 Starting overwatch
	2025/10/19 17:16:59 Using namespace: kubernetes-dashboard
	2025/10/19 17:16:59 Using in-cluster config to connect to apiserver
	2025/10/19 17:16:59 Using secret token for csrf signing
	2025/10/19 17:16:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:16:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:16:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:16:59 Generating JWE encryption key
	2025/10/19 17:16:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:16:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:16:59 Initializing JWE encryption key from synchronized object
	2025/10/19 17:16:59 Creating in-cluster Sidecar client
	2025/10/19 17:16:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:59 Serving insecurely on HTTP port: 9090
	2025/10/19 17:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
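
The metric client health check fails because the dashboard-metrics-scraper Service has no ready backend while its pod crash-loops. A sketch to confirm the empty endpoints:

    kubectl --context default-k8s-diff-port-663015 -n kubernetes-dashboard \
      get endpoints dashboard-metrics-scraper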
	
	
	==> storage-provisioner [4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d] <==
	I1019 17:17:20.431465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:17:20.439728       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:17:20.439781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:17:20.442212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:23.897490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:28.158592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:31.757631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:34.811513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.834285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.838814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:37.838958       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:17:37.839107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5184909-d0d7-4566-badd-0d775b85f21e", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff became leader
	I1019 17:17:37.839156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff!
	W1019 17:17:37.842036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.845372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:37.939384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff!
	W1019 17:17:39.848712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:39.853059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:41.856539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:41.860989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25] <==
	I1019 17:16:49.622757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:17:19.625609       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
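
Read together, the dump shows a cluster that restarted cleanly apart from the dashboard-metrics-scraper crash loop; the kubelet shutdown at the end is the pause being exercised, which is why the single-field status probe below exits 2 even while printing Running for the apiserver. Querying more than one field gives fuller context (a sketch):

    out/minikube-linux-amd64 status -p default-k8s-diff-port-663015 \
      --format='{{.APIServer}} {{.Kubelet}}'
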
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015: exit status 2 (349.807101ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-663015
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-663015:

-- stdout --
	[
	    {
	        "Id": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	        "Created": "2025-10-19T17:15:37.665155013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:16:38.526270344Z",
	            "FinishedAt": "2025-10-19T17:16:37.654048946Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hostname",
	        "HostsPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/hosts",
	        "LogPath": "/var/lib/docker/containers/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1/8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1-json.log",
	        "Name": "/default-k8s-diff-port-663015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-663015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-663015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8abacb4fd440310264bf5826aa4e098ead2c9c051555dedfb3490d558304cfa1",
	                "LowerDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede-init/diff:/var/lib/docker/overlay2/0516a6de822f96a429e000bb6523437ca93a66a6aecaba561c6abf45d0d1defe/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0ced4b65fd57ff4829b7f08104e8b5cd0e9cd252b29c14f2eeaa24cc6489ede/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-663015",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-663015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-663015",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-663015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed3f81f658cbe16f536c1ac747442ea27efab1429f3c5fcfd91d96e16704b896",
	            "SandboxKey": "/var/run/docker/netns/ed3f81f658cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-663015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:fb:99:d9:0d:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11e31399831af0dccd7c897515d1d7c4e22e31f4e5da333490f417dfbabfda44",
	                    "EndpointID": "c99cd5f30981722c9a472b4b321225fdbfa23a3fc45505513f2c2cf11450bd38",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-663015",
	                        "8abacb4fd440"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
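
The inspect output shows the container running, with the profile's API server port 8444/tcp published on 127.0.0.1:33102. A short sketch (assuming only that the docker CLI is on PATH) that extracts that mapping with the same Go-template syntax docker inspect -f accepts:

	// port.go: hedged sketch that prints the host port bound to the container's
	// 8444/tcp, i.e. the 33102 visible in the NetworkSettings JSON above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl,
			"default-k8s-diff-port-663015").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		fmt.Printf("apiserver published at https://127.0.0.1:%s\n",
			strings.TrimSpace(string(out)))
	}
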
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015: exit status 2 (336.046231ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-663015 logs -n 25: (1.212473062s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-848035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ stop    │ -p newest-cni-848035 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ image   │ newest-cni-848035 image list --format=json                                                                                                                                                                                                    │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ pause   │ -p newest-cni-848035 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p newest-cni-848035                                                                                                                                                                                                                          │ newest-cni-848035            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p kubernetes-upgrade-318879                                                                                                                                                                                                                  │ kubernetes-upgrade-318879    │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-624324               │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ image   │ embed-certs-090139 image list --format=json                                                                                                                                                                                                   │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p embed-certs-090139 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ delete  │ -p embed-certs-090139                                                                                                                                                                                                                         │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ delete  │ -p embed-certs-090139                                                                                                                                                                                                                         │ embed-certs-090139           │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p calico-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-624324                │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	│ ssh     │ -p auto-624324 pgrep -a kubelet                                                                                                                                                                                                               │ auto-624324                  │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ image   │ default-k8s-diff-port-663015 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ pause   │ -p default-k8s-diff-port-663015 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-663015 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:17:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:17:30.185260  289639 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:30.185541  289639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:30.185551  289639 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:30.185557  289639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:30.185792  289639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:17:30.186331  289639 out.go:368] Setting JSON to false
	I1019 17:17:30.187545  289639 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3596,"bootTime":1760890654,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:17:30.187642  289639 start.go:143] virtualization: kvm guest
	I1019 17:17:30.189871  289639 out.go:179] * [calico-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:17:30.191302  289639 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:17:30.191334  289639 notify.go:221] Checking for updates...
	I1019 17:17:30.194160  289639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:17:30.195367  289639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:30.196824  289639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:17:30.197996  289639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:17:30.199151  289639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:17:30.200539  284195 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:17:30.200620  284195 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:17:30.200740  284195 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:17:30.200815  284195 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 17:17:30.200888  284195 kubeadm.go:319] OS: Linux
	I1019 17:17:30.200991  284195 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:17:30.201097  284195 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:17:30.201179  284195 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:17:30.201247  284195 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:17:30.201426  284195 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:17:30.201499  284195 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:17:30.201563  284195 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:17:30.201632  284195 kubeadm.go:319] CGROUPS_IO: enabled
	I1019 17:17:30.201738  284195 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:17:30.201891  284195 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:17:30.202039  284195 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:17:30.202150  284195 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:17:30.203504  284195 out.go:252]   - Generating certificates and keys ...
	I1019 17:17:30.203597  284195 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:17:30.203710  284195 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:17:30.203830  284195 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:17:30.203918  284195 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:17:30.204036  284195 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:17:30.204153  284195 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:17:30.204235  284195 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:17:30.204375  284195 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-624324 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 17:17:30.204473  284195 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:17:30.204649  284195 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-624324 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 17:17:30.204713  284195 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:17:30.204766  284195 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:17:30.204803  284195 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:17:30.204877  284195 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:17:30.204963  284195 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:17:30.205050  284195 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:17:30.205155  284195 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:17:30.205236  284195 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:17:30.205287  284195 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:17:30.205379  284195 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:17:30.205454  284195 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:17:30.206741  284195 out.go:252]   - Booting up control plane ...
	I1019 17:17:30.206821  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:17:30.206886  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:17:30.206953  284195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:17:30.207085  284195 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:17:30.207209  284195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:17:30.207368  284195 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:17:30.207493  284195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:17:30.207559  284195 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:17:30.207737  284195 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:17:30.207902  284195 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:17:30.207985  284195 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001588575s
	I1019 17:17:30.208147  284195 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:17:30.208269  284195 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 17:17:30.208401  284195 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:17:30.208471  284195 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:17:30.208567  284195 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.88154165s
	I1019 17:17:30.208662  284195 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.48464111s
	I1019 17:17:30.208776  284195 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502087654s
	I1019 17:17:30.208893  284195 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:17:30.209059  284195 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:17:30.209144  284195 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:17:30.209355  284195 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-624324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:17:30.209434  284195 kubeadm.go:319] [bootstrap-token] Using token: 6l0bh9.d4pxjapp0nmt5wyg
	I1019 17:17:30.210995  284195 out.go:252]   - Configuring RBAC rules ...
	I1019 17:17:30.211179  284195 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:17:30.211302  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:17:30.211491  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:17:30.211676  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:17:30.211844  284195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:17:30.211981  284195 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:17:30.212174  284195 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:17:30.212236  284195 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:17:30.212305  284195 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:17:30.212314  284195 kubeadm.go:319] 
	I1019 17:17:30.212425  284195 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:17:30.212440  284195 kubeadm.go:319] 
	I1019 17:17:30.212553  284195 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:17:30.212562  284195 kubeadm.go:319] 
	I1019 17:17:30.212595  284195 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:17:30.212647  284195 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:17:30.212689  284195 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:17:30.212694  284195 kubeadm.go:319] 
	I1019 17:17:30.212736  284195 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:17:30.212744  284195 kubeadm.go:319] 
	I1019 17:17:30.212789  284195 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:17:30.212795  284195 kubeadm.go:319] 
	I1019 17:17:30.212846  284195 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:17:30.212936  284195 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:17:30.213014  284195 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:17:30.213022  284195 kubeadm.go:319] 
	I1019 17:17:30.213152  284195 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:17:30.213264  284195 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:17:30.213274  284195 kubeadm.go:319] 
	I1019 17:17:30.213424  284195 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6l0bh9.d4pxjapp0nmt5wyg \
	I1019 17:17:30.213561  284195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 \
	I1019 17:17:30.213591  284195 kubeadm.go:319] 	--control-plane 
	I1019 17:17:30.213601  284195 kubeadm.go:319] 
	I1019 17:17:30.213710  284195 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:17:30.213720  284195 kubeadm.go:319] 
	I1019 17:17:30.213792  284195 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6l0bh9.d4pxjapp0nmt5wyg \
	I1019 17:17:30.213915  284195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:861d72ec119aad1188a44160bac431ea1f9fef8d67df56e962f86d9bbf64f1d3 
	I1019 17:17:30.213931  284195 cni.go:84] Creating CNI manager for "kindnet"
	I1019 17:17:30.215435  284195 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:17:30.200858  289639 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201032  289639 config.go:182] Loaded profile config "default-k8s-diff-port-663015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201190  289639 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:30.201310  289639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:17:30.227986  289639 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:17:30.228102  289639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:30.294676  289639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:17:30.282465244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:30.294835  289639 docker.go:319] overlay module found
	I1019 17:17:30.296981  289639 out.go:179] * Using the docker driver based on user configuration
	I1019 17:17:30.298470  289639 start.go:309] selected driver: docker
	I1019 17:17:30.298482  289639 start.go:930] validating driver "docker" against <nil>
	I1019 17:17:30.298493  289639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:17:30.299116  289639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:17:30.365861  289639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:17:30.35470659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:17:30.366124  289639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:17:30.366384  289639 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:30.369571  289639 out.go:179] * Using Docker driver with root privileges
	I1019 17:17:30.371091  289639 cni.go:84] Creating CNI manager for "calico"
	I1019 17:17:30.371118  289639 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1019 17:17:30.371208  289639 start.go:353] cluster config:
	{Name:calico-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:30.372839  289639 out.go:179] * Starting "calico-624324" primary control-plane node in "calico-624324" cluster
	I1019 17:17:30.374294  289639 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:17:30.376419  289639 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:17:30.378319  289639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:30.378374  289639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:17:30.378385  289639 cache.go:59] Caching tarball of preloaded images
	I1019 17:17:30.378420  289639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:17:30.378507  289639 preload.go:233] Found /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:17:30.378527  289639 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:17:30.378631  289639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/calico-624324/config.json ...
	I1019 17:17:30.378658  289639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/calico-624324/config.json: {Name:mkc1d18576fa2e902d7f1848da48391372f0709f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:30.402467  289639 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:17:30.402487  289639 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:17:30.402504  289639 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:17:30.402534  289639 start.go:360] acquireMachinesLock for calico-624324: {Name:mk2c98cc9b235a303919b952cb56e2eb1222327c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:30.402654  289639 start.go:364] duration metric: took 100.062µs to acquireMachinesLock for "calico-624324"
	I1019 17:17:30.402688  289639 start.go:93] Provisioning new machine with config: &{Name:calico-624324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-624324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:30.402774  289639 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:17:27.947909  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	W1019 17:17:29.948453  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	W1019 17:17:32.448468  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	I1019 17:17:30.216943  284195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:17:30.222173  284195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:17:30.222193  284195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:17:30.237562  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:17:30.490427  284195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:17:30.490546  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:30.490597  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-624324 minikube.k8s.io/updated_at=2025_10_19T17_17_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=kindnet-624324 minikube.k8s.io/primary=true
	I1019 17:17:30.590937  284195 ops.go:34] apiserver oom_adj: -16
	I1019 17:17:30.591031  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:31.091534  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:31.592110  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:32.091793  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:32.591892  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:30.408239  289639 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:17:30.408537  289639 start.go:159] libmachine.API.Create for "calico-624324" (driver="docker")
	I1019 17:17:30.408578  289639 client.go:171] LocalClient.Create starting
	I1019 17:17:30.408655  289639 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem
	I1019 17:17:30.408711  289639 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:30.408745  289639 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:30.408833  289639 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem
	I1019 17:17:30.408867  289639 main.go:143] libmachine: Decoding PEM data...
	I1019 17:17:30.408883  289639 main.go:143] libmachine: Parsing certificate...
	I1019 17:17:30.409393  289639 cli_runner.go:164] Run: docker network inspect calico-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:17:30.430134  289639 cli_runner.go:211] docker network inspect calico-624324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:17:30.430246  289639 network_create.go:284] running [docker network inspect calico-624324] to gather additional debugging logs...
	I1019 17:17:30.430269  289639 cli_runner.go:164] Run: docker network inspect calico-624324
	W1019 17:17:30.451573  289639 cli_runner.go:211] docker network inspect calico-624324 returned with exit code 1
	I1019 17:17:30.451634  289639 network_create.go:287] error running [docker network inspect calico-624324]: docker network inspect calico-624324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-624324 not found
	I1019 17:17:30.451653  289639 network_create.go:289] output of [docker network inspect calico-624324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-624324 not found
	
	** /stderr **
	I1019 17:17:30.451866  289639 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:17:30.472830  289639 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
	I1019 17:17:30.473906  289639 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0f2c415cfca9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:f0:8a:e9:5f:de} reservation:<nil>}
	I1019 17:17:30.474899  289639 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ca739aebb768 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a6:81:0d:b3:5e:ec} reservation:<nil>}
	I1019 17:17:30.475677  289639 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a9c8e7e3ba20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:77:c0:aa:7f:5e} reservation:<nil>}
	I1019 17:17:30.476341  289639 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-11e31399831a IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:85:d0:14:cb:57} reservation:<nil>}
	I1019 17:17:30.477003  289639 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a3eeeb5b1108 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:06:da:04:df:0e:fc} reservation:<nil>}
	I1019 17:17:30.477817  289639 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f64ce0}
	I1019 17:17:30.477842  289639 network_create.go:124] attempt to create docker network calico-624324 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1019 17:17:30.477889  289639 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-624324 calico-624324
	I1019 17:17:30.555614  289639 network_create.go:108] docker network calico-624324 192.168.103.0/24 created
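	The scan above walks the already-allocated docker bridges (192.168.49.0/24, .58, .67, .76, .85, .94) and settles on the first free candidate, 192.168.103.0/24; the third octet steps by 9 between attempts. A minimal Go sketch of that selection follows — the docker invocations and the takenSubnets helper are illustrative stand-ins for minikube's internal cli_runner/network.go logic, not its actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// takenSubnets asks docker for every network's IPAM subnet, shelling out
	// the way the logged cli_runner does. A sketch, not minikube code.
	func takenSubnets() map[string]bool {
		taken := map[string]bool{}
		ids, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			return taken
		}
		for _, id := range strings.Fields(string(ids)) {
			out, err := exec.Command("docker", "network", "inspect", id,
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
			if err != nil {
				continue
			}
			for _, s := range strings.Fields(string(out)) {
				taken[s] = true
			}
		}
		return taken
	}

	func main() {
		taken := takenSubnets()
		// Candidates step the third octet by 9, matching the 49, 58, 67, ... walk above.
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				fmt.Println("free subnet:", cidr) // this run settled on 192.168.103.0/24
				return
			}
		}
		fmt.Println("no free candidate in 192.168.0.0/16")
	}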
	I1019 17:17:30.555651  289639 kic.go:121] calculated static IP "192.168.103.2" for the "calico-624324" container
	I1019 17:17:30.555809  289639 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:17:30.577790  289639 cli_runner.go:164] Run: docker volume create calico-624324 --label name.minikube.sigs.k8s.io=calico-624324 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:17:30.601230  289639 oci.go:103] Successfully created a docker volume calico-624324
	I1019 17:17:30.601299  289639 cli_runner.go:164] Run: docker run --rm --name calico-624324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-624324 --entrypoint /usr/bin/test -v calico-624324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:17:31.008663  289639 oci.go:107] Successfully prepared a docker volume calico-624324
	I1019 17:17:31.008716  289639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:31.008741  289639 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:17:31.008790  289639 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:17:33.091606  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:33.591188  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:34.091214  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:34.592036  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.091193  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.591965  284195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:17:35.688361  284195 kubeadm.go:1114] duration metric: took 5.197878626s to wait for elevateKubeSystemPrivileges
	I1019 17:17:35.688391  284195 kubeadm.go:403] duration metric: took 16.642563618s to StartCluster
	I1019 17:17:35.688408  284195 settings.go:142] acquiring lock: {Name:mk205bd8d663b4a1850184d0f589700c0d2429b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:35.688469  284195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:17:35.689712  284195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/kubeconfig: {Name:mk4cdfafbafeae33568865a60cf929c656d55c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:17:35.689929  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:17:35.689952  284195 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:17:35.689925  284195 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:17:35.690044  284195 addons.go:70] Setting storage-provisioner=true in profile "kindnet-624324"
	I1019 17:17:35.690060  284195 addons.go:239] Setting addon storage-provisioner=true in "kindnet-624324"
	I1019 17:17:35.690102  284195 host.go:66] Checking if "kindnet-624324" exists ...
	I1019 17:17:35.690110  284195 addons.go:70] Setting default-storageclass=true in profile "kindnet-624324"
	I1019 17:17:35.690129  284195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-624324"
	I1019 17:17:35.690143  284195 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:35.690450  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.690583  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.691861  284195 out.go:179] * Verifying Kubernetes components...
	I1019 17:17:35.695143  284195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:35.720116  284195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:17:35.721460  284195 addons.go:239] Setting addon default-storageclass=true in "kindnet-624324"
	I1019 17:17:35.721498  284195 host.go:66] Checking if "kindnet-624324" exists ...
	I1019 17:17:35.721743  284195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:35.721783  284195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:17:35.721842  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:35.721973  284195 cli_runner.go:164] Run: docker container inspect kindnet-624324 --format={{.State.Status}}
	I1019 17:17:35.755031  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:35.756465  284195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:35.756504  284195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:17:35.756709  284195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-624324
	I1019 17:17:35.784975  284195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/kindnet-624324/id_rsa Username:docker}
	I1019 17:17:35.816556  284195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:17:35.872892  284195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:17:35.915310  284195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:17:35.933958  284195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:17:36.050912  284195 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
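	The sed pipeline a few lines up rewrites the coredns ConfigMap in place; reconstructed from its -e expressions, the injected fragment adds a log directive before the existing errors line and a hosts block immediately before the existing forward directive. Only the affected Corefile lines are shown; the rest of the file is untouched:

	    log            # inserted before the existing "errors" line
	    errors
	    hosts {        # inserted before the existing forward directive
	       192.168.94.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf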
	I1019 17:17:36.054196  284195 node_ready.go:35] waiting up to 15m0s for node "kindnet-624324" to be "Ready" ...
	I1019 17:17:36.319840  284195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1019 17:17:34.537344  279986 node_ready.go:57] node "auto-624324" has "Ready":"False" status (will retry)
	I1019 17:17:35.084309  279986 node_ready.go:49] node "auto-624324" is "Ready"
	I1019 17:17:35.084388  279986 node_ready.go:38] duration metric: took 11.139734674s for node "auto-624324" to be "Ready" ...
	I1019 17:17:35.084409  279986 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:17:35.084476  279986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:17:35.101369  279986 api_server.go:72] duration metric: took 11.73260785s to wait for apiserver process to appear ...
	I1019 17:17:35.101391  279986 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:17:35.101413  279986 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:17:35.106137  279986 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:17:35.107271  279986 api_server.go:141] control plane version: v1.34.1
	I1019 17:17:35.107294  279986 api_server.go:131] duration metric: took 5.897803ms to wait for apiserver health ...
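	The healthz wait above is a plain HTTPS GET repeated until the endpoint answers 200 with body "ok". A self-contained Go sketch, with InsecureSkipVerify standing in for minikube's cluster-CA trust (a simplifying assumption) and the address taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}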
	I1019 17:17:35.107304  279986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:17:35.195018  279986 system_pods.go:59] 8 kube-system pods found
	I1019 17:17:35.195089  279986 system_pods.go:61] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.195110  279986 system_pods.go:61] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.195121  279986 system_pods.go:61] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.195133  279986 system_pods.go:61] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.195142  279986 system_pods.go:61] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.195148  279986 system_pods.go:61] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.195156  279986 system_pods.go:61] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.195164  279986 system_pods.go:61] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.195179  279986 system_pods.go:74] duration metric: took 87.868832ms to wait for pod list to return data ...
	I1019 17:17:35.195199  279986 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:17:35.199139  279986 default_sa.go:45] found service account: "default"
	I1019 17:17:35.199168  279986 default_sa.go:55] duration metric: took 3.962279ms for default service account to be created ...
	I1019 17:17:35.199181  279986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:17:35.202043  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.202090  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.202099  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.202107  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.202112  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.202118  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.202123  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.202128  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.202136  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.202162  279986 retry.go:31] will retry after 285.532284ms: missing components: kube-dns
	I1019 17:17:35.498758  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.498811  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:17:35.498819  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.498827  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.498832  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.498839  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.498844  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.498849  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.498856  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:17:35.498874  279986 retry.go:31] will retry after 362.776811ms: missing components: kube-dns
	I1019 17:17:35.867039  279986 system_pods.go:86] 8 kube-system pods found
	I1019 17:17:35.867188  279986 system_pods.go:89] "coredns-66bc5c9577-5mktl" [86e6103e-b259-44eb-bda7-608ba13635ea] Running
	I1019 17:17:35.867201  279986 system_pods.go:89] "etcd-auto-624324" [d10579f9-f659-4ffd-b07f-0bccb7764993] Running
	I1019 17:17:35.867223  279986 system_pods.go:89] "kindnet-sn8ll" [89a2958a-6f16-45e7-95a8-c808138daf21] Running
	I1019 17:17:35.867232  279986 system_pods.go:89] "kube-apiserver-auto-624324" [8c7a6aad-aa83-4c53-8d67-eec9eb82ed6a] Running
	I1019 17:17:35.867240  279986 system_pods.go:89] "kube-controller-manager-auto-624324" [0fd236da-56f0-4df1-8bc4-820380e4d3d2] Running
	I1019 17:17:35.867251  279986 system_pods.go:89] "kube-proxy-84x4j" [038b0ec3-1c9b-4773-b315-7e649f429afb] Running
	I1019 17:17:35.867258  279986 system_pods.go:89] "kube-scheduler-auto-624324" [a8e7de84-30d1-4e87-b5ad-e36e36b56c20] Running
	I1019 17:17:35.867268  279986 system_pods.go:89] "storage-provisioner" [23f97584-2cde-4e7e-90fd-b78f5809de66] Running
	I1019 17:17:35.867279  279986 system_pods.go:126] duration metric: took 668.091061ms to wait for k8s-apps to be running ...
	I1019 17:17:35.867330  279986 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:17:35.867387  279986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:17:35.888132  279986 system_svc.go:56] duration metric: took 20.734531ms WaitForService to wait for kubelet
	I1019 17:17:35.888168  279986 kubeadm.go:587] duration metric: took 12.519409433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:17:35.888201  279986 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:17:35.894437  279986 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 17:17:35.894471  279986 node_conditions.go:123] node cpu capacity is 8
	I1019 17:17:35.894486  279986 node_conditions.go:105] duration metric: took 6.277959ms to run NodePressure ...
	I1019 17:17:35.894501  279986 start.go:242] waiting for startup goroutines ...
	I1019 17:17:35.894512  279986 start.go:247] waiting for cluster config update ...
	I1019 17:17:35.894530  279986 start.go:256] writing updated cluster config ...
	I1019 17:17:35.894852  279986 ssh_runner.go:195] Run: rm -f paused
	I1019 17:17:35.905871  279986 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:35.967887  279986 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5mktl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.974949  279986 pod_ready.go:94] pod "coredns-66bc5c9577-5mktl" is "Ready"
	I1019 17:17:35.974992  279986 pod_ready.go:86] duration metric: took 7.06813ms for pod "coredns-66bc5c9577-5mktl" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.977774  279986 pod_ready.go:83] waiting for pod "etcd-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.982121  279986 pod_ready.go:94] pod "etcd-auto-624324" is "Ready"
	I1019 17:17:35.982190  279986 pod_ready.go:86] duration metric: took 4.395149ms for pod "etcd-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.984377  279986 pod_ready.go:83] waiting for pod "kube-apiserver-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.989387  279986 pod_ready.go:94] pod "kube-apiserver-auto-624324" is "Ready"
	I1019 17:17:35.989409  279986 pod_ready.go:86] duration metric: took 4.956684ms for pod "kube-apiserver-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:35.991486  279986 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.312207  279986 pod_ready.go:94] pod "kube-controller-manager-auto-624324" is "Ready"
	I1019 17:17:36.312239  279986 pod_ready.go:86] duration metric: took 320.727534ms for pod "kube-controller-manager-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.511302  279986 pod_ready.go:83] waiting for pod "kube-proxy-84x4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:36.910102  279986 pod_ready.go:94] pod "kube-proxy-84x4j" is "Ready"
	I1019 17:17:36.910127  279986 pod_ready.go:86] duration metric: took 398.801949ms for pod "kube-proxy-84x4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.111777  279986 pod_ready.go:83] waiting for pod "kube-scheduler-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.510882  279986 pod_ready.go:94] pod "kube-scheduler-auto-624324" is "Ready"
	I1019 17:17:37.510913  279986 pod_ready.go:86] duration metric: took 399.110949ms for pod "kube-scheduler-auto-624324" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:17:37.510932  279986 pod_ready.go:40] duration metric: took 1.605021737s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:17:37.557619  279986 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:17:37.560255  279986 out.go:179] * Done! kubectl is now configured to use "auto-624324" cluster and "default" namespace by default
	I1019 17:17:36.320995  284195 addons.go:515] duration metric: took 631.042942ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:17:36.556029  284195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-624324" context rescaled to 1 replicas
	I1019 17:17:35.523612  289639 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-624324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.514758827s)
	I1019 17:17:35.523650  289639 kic.go:203] duration metric: took 4.514904591s to extract preloaded images to volume ...
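	The 4.5s step just completed is the populate-a-named-volume pattern: a throwaway container whose entrypoint is tar mounts the lz4 preload read-only and unpacks it into the volume. A hedged Go wrapper around the exact command from the log (populateVolume and the placeholder arguments in main are illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// populateVolume runs the same docker command as the log above: tar as the
	// container entrypoint, preload mounted read-only, volume as the target.
	// The image must ship an lz4 binary for -I lz4 to work (kicbase does).
	func populateVolume(preload, volume, image string) error {
		out, err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical arguments; the run above used the kicbase image and the
		// v1.34.1 cri-o preload tarball.
		if err := populateVolume("preload.tar.lz4", "demo-volume", "kicbase:latest"); err != nil {
			fmt.Println(err)
		}
	}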
	W1019 17:17:35.523800  289639 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:17:35.523841  289639 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:17:35.523895  289639 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:17:35.625654  289639 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-624324 --name calico-624324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-624324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-624324 --network calico-624324 --ip 192.168.103.2 --volume calico-624324:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:17:36.033377  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Running}}
	I1019 17:17:36.060470  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.086184  289639 cli_runner.go:164] Run: docker exec calico-624324 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:17:36.142694  289639 oci.go:144] the created container "calico-624324" has a running status.
	I1019 17:17:36.142727  289639 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa...
	I1019 17:17:36.226603  289639 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:17:36.255542  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.279260  289639 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:17:36.279289  289639 kic_runner.go:114] Args: [docker exec --privileged calico-624324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:17:36.338266  289639 cli_runner.go:164] Run: docker container inspect calico-624324 --format={{.State.Status}}
	I1019 17:17:36.357462  289639 machine.go:94] provisionDockerMachine start ...
	I1019 17:17:36.357558  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:36.378591  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:36.378871  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:36.378888  289639 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:17:36.379632  289639 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34746->127.0.0.1:33114: read: connection reset by peer
	I1019 17:17:39.526597  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-624324
	
	I1019 17:17:39.526623  289639 ubuntu.go:182] provisioning hostname "calico-624324"
	I1019 17:17:39.526695  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:39.548785  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:39.549103  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:39.549123  289639 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-624324 && echo "calico-624324" | sudo tee /etc/hostname
	I1019 17:17:39.707759  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-624324
	
	I1019 17:17:39.707844  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:39.731441  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:39.731756  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:39.731781  289639 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-624324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-624324/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-624324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:17:39.877027  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:17:39.877086  289639 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3731/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3731/.minikube}
	I1019 17:17:39.877127  289639 ubuntu.go:190] setting up certificates
	I1019 17:17:39.877151  289639 provision.go:84] configureAuth start
	I1019 17:17:39.877226  289639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-624324
	I1019 17:17:39.898226  289639 provision.go:143] copyHostCerts
	I1019 17:17:39.898290  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem, removing ...
	I1019 17:17:39.898302  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem
	I1019 17:17:39.898375  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/ca.pem (1082 bytes)
	I1019 17:17:39.898497  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem, removing ...
	I1019 17:17:39.898509  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem
	I1019 17:17:39.898554  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/cert.pem (1123 bytes)
	I1019 17:17:39.898660  289639 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem, removing ...
	I1019 17:17:39.898673  289639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem
	I1019 17:17:39.898718  289639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3731/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3731/.minikube/key.pem (1679 bytes)
	I1019 17:17:39.898808  289639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca-key.pem org=jenkins.calico-624324 san=[127.0.0.1 192.168.103.2 calico-624324 localhost minikube]
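	configureAuth generates a server certificate whose SAN list is exactly what the log prints: two IPs and three DNS names. A self-signed Go approximation of that certificate follows; minikube actually signs with its own CA, so self-signing here is a deliberate simplification:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-624324"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"calico-624324", "localhost", "minikube"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}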
	I1019 17:17:40.530817  289639 provision.go:177] copyRemoteCerts
	I1019 17:17:40.530880  289639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:17:40.530916  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:40.549885  289639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa Username:docker}
	I1019 17:17:40.651146  289639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:17:40.671104  289639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 17:17:40.688686  289639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:17:40.706163  289639 provision.go:87] duration metric: took 828.996298ms to configureAuth
	I1019 17:17:40.706190  289639 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:17:40.706346  289639 config.go:182] Loaded profile config "calico-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:40.706447  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:40.726507  289639 main.go:143] libmachine: Using SSH client type: native
	I1019 17:17:40.726845  289639 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1019 17:17:40.726875  289639 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:17:40.989826  289639 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:17:40.989854  289639 machine.go:97] duration metric: took 4.632367113s to provisionDockerMachine
	I1019 17:17:40.989868  289639 client.go:174] duration metric: took 10.581279306s to LocalClient.Create
	I1019 17:17:40.989900  289639 start.go:167] duration metric: took 10.581364819s to libmachine.API.Create "calico-624324"
	I1019 17:17:40.989914  289639 start.go:293] postStartSetup for "calico-624324" (driver="docker")
	I1019 17:17:40.989926  289639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:17:40.989996  289639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:17:40.990037  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:41.011554  289639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa Username:docker}
	I1019 17:17:41.114130  289639 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:17:41.118411  289639 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:17:41.118440  289639 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:17:41.118453  289639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/addons for local assets ...
	I1019 17:17:41.118521  289639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3731/.minikube/files for local assets ...
	I1019 17:17:41.118630  289639 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem -> 72282.pem in /etc/ssl/certs
	I1019 17:17:41.118761  289639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:17:41.126885  289639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/ssl/certs/72282.pem --> /etc/ssl/certs/72282.pem (1708 bytes)
	I1019 17:17:41.148704  289639 start.go:296] duration metric: took 158.777357ms for postStartSetup
	I1019 17:17:41.149145  289639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-624324
	I1019 17:17:41.169357  289639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/calico-624324/config.json ...
	I1019 17:17:41.169651  289639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:17:41.169698  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:41.187423  289639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa Username:docker}
	I1019 17:17:41.283005  289639 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:17:41.288219  289639 start.go:128] duration metric: took 10.885430811s to createHost
	I1019 17:17:41.288241  289639 start.go:83] releasing machines lock for "calico-624324", held for 10.885570534s
	I1019 17:17:41.288298  289639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-624324
	I1019 17:17:41.308584  289639 ssh_runner.go:195] Run: cat /version.json
	I1019 17:17:41.308641  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:41.308713  289639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:17:41.308792  289639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-624324
	I1019 17:17:41.331144  289639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa Username:docker}
	I1019 17:17:41.331694  289639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/calico-624324/id_rsa Username:docker}
	I1019 17:17:41.500970  289639 ssh_runner.go:195] Run: systemctl --version
	I1019 17:17:41.507631  289639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:17:41.545546  289639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:17:41.550708  289639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:17:41.550784  289639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:17:41.577895  289639 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
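	The find/mv pair above is how the default bridge CNI configs get sidelined so they cannot race the CNI about to be installed: any plain file matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix. An equivalent Go sketch (the directory and suffix come from the log; the walk itself is illustrative):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			// Mirror the find expression: plain files named *bridge* or *podman*,
			// skipping anything already disabled.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join(dir, name)
				if err := os.Rename(p, p+".mk_disabled"); err == nil {
					fmt.Println("disabled", p)
				}
			}
		}
	}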
	I1019 17:17:41.577923  289639 start.go:496] detecting cgroup driver to use...
	I1019 17:17:41.577960  289639 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:17:41.578006  289639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:17:41.595093  289639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:17:41.609422  289639 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:17:41.609505  289639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:17:41.629677  289639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:17:41.651752  289639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:17:41.772880  289639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:17:41.900192  289639 docker.go:234] disabling docker service ...
	I1019 17:17:41.900248  289639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:17:41.922307  289639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:17:41.937035  289639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:17:42.034148  289639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:17:42.131377  289639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:17:42.147368  289639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:17:42.165524  289639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:17:42.165588  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.179212  289639 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:17:42.179285  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.190777  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.201395  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.211404  289639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:17:42.221981  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.232500  289639 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.249977  289639 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:17:42.260006  289639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:17:42.268791  289639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:17:42.277032  289639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:17:42.373441  289639 ssh_runner.go:195] Run: sudo systemctl restart crio
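	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl before crio is restarted. A reconstructed fragment — the [crio.runtime]/[crio.image] section placement is an assumption based on stock CRI-O config layout, not shown in the log:

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"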
	I1019 17:17:42.777934  289639 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:17:42.778002  289639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:17:42.782314  289639 start.go:564] Will wait 60s for crictl version
	I1019 17:17:42.782369  289639 ssh_runner.go:195] Run: which crictl
	I1019 17:17:42.786273  289639 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:17:42.815453  289639 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:17:42.815535  289639 ssh_runner.go:195] Run: crio --version
	I1019 17:17:42.852451  289639 ssh_runner.go:195] Run: crio --version
	I1019 17:17:42.888130  289639 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 17:17:38.057522  284195 node_ready.go:57] node "kindnet-624324" has "Ready":"False" status (will retry)
	W1019 17:17:40.059622  284195 node_ready.go:57] node "kindnet-624324" has "Ready":"False" status (will retry)
	W1019 17:17:42.669671  284195 node_ready.go:57] node "kindnet-624324" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 19 17:17:03 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:03.367810595Z" level=info msg="Started container" PID=1713 containerID=18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper id=f04e35b7-c21a-4434-9681-c85bf9715924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6af35ef9d2a2159e56fddc5247be1a66d40c981bd05b4f663689615210175014
	Oct 19 17:17:04 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:04.329452873Z" level=info msg="Removing container: 80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77" id=88420c50-0d07-4321-b8f2-9d3946fe92ab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:04 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:04.339468195Z" level=info msg="Removed container 80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=88420c50-0d07-4321-b8f2-9d3946fe92ab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.374922553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2c74a611-197b-4a70-be2a-7fe30bdf1e62 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.376105319Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d68b9017-011b-4943-85a7-d3e1b56ec779 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.377577433Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95de63c1-55ea-4e74-8628-692bab90b918 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.377847054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382437547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382849775Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a16bd65f5a14248e967fc52c24b65d730a21da85271a68c2aa835878beca85cd/merged/etc/passwd: no such file or directory"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.382879286Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a16bd65f5a14248e967fc52c24b65d730a21da85271a68c2aa835878beca85cd/merged/etc/group: no such file or directory"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.383313416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.414736647Z" level=info msg="Created container 4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d: kube-system/storage-provisioner/storage-provisioner" id=95de63c1-55ea-4e74-8628-692bab90b918 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.415721707Z" level=info msg="Starting container: 4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d" id=25d03c06-65fe-441c-8c92-50b42ab21dbb name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:20 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:20.418189609Z" level=info msg="Started container" PID=1727 containerID=4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d description=kube-system/storage-provisioner/storage-provisioner id=25d03c06-65fe-441c-8c92-50b42ab21dbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=2901d7666db5f343407709b25c0673c122903f1b8623ee9e685a5121d48921f5
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.225547156Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bf653993-975b-453d-ae37-bdb394a7f960 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.22650212Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=578b51f5-970a-4a8e-a856-52df2c14a5f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.227593849Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=4d6491c8-8433-46f3-86e1-268b0e9c967d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.227809472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.233437488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.234138992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.263734099Z" level=info msg="Created container 90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=4d6491c8-8433-46f3-86e1-268b0e9c967d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.264439498Z" level=info msg="Starting container: 90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502" id=24da0f3a-e842-4daf-8794-19a8433b9ef6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.266603348Z" level=info msg="Started container" PID=1741 containerID=90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper id=24da0f3a-e842-4daf-8794-19a8433b9ef6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6af35ef9d2a2159e56fddc5247be1a66d40c981bd05b4f663689615210175014
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.394729515Z" level=info msg="Removing container: 18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e" id=75fc2e8b-4278-478c-b311-8e4dbec278c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:17:25 default-k8s-diff-port-663015 crio[552]: time="2025-10-19T17:17:25.408191429Z" level=info msg="Removed container 18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5/dashboard-metrics-scraper" id=75fc2e8b-4278-478c-b311-8e4dbec278c0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	90266bf26f9f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   6af35ef9d2a21       dashboard-metrics-scraper-6ffb444bf9-wz2k5             kubernetes-dashboard
	4bc7eb843c662       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   2901d7666db5f       storage-provisioner                                    kube-system
	955e744fa1a45       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   065342cb78138       kubernetes-dashboard-855c9754f9-kr5fp                  kubernetes-dashboard
	2429e7f2aaaa2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   48b9efd29f126       busybox                                                default
	5549fb115a4d8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   7bba9b91268ce       coredns-66bc5c9577-2r8tf                               kube-system
	90c70b291b7dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   2901d7666db5f       storage-provisioner                                    kube-system
	cf5b36f1b4008       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   f7787aa1db132       kube-proxy-g62dn                                       kube-system
	15343a83908f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   64e6f770e0171       kindnet-rrthg                                          kube-system
	6f5702f98db02       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   9fb8b70fb0e4a       kube-apiserver-default-k8s-diff-port-663015            kube-system
	0198767b0edb6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   9ecb9760ac032       kube-scheduler-default-k8s-diff-port-663015            kube-system
	98c9671492774       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   59c38f3fffda0       kube-controller-manager-default-k8s-diff-port-663015   kube-system
	79c3046dfcac2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   f720bde5c5e57       etcd-default-k8s-diff-port-663015                      kube-system
	
	
	==> coredns [5549fb115a4d8128a67647e53adefbd5f1396f4eca49ed1c46a0a85127887340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48677 - 18927 "HINFO IN 1165854263028167530.7393025748261522906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06438571s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-663015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-663015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-663015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-663015
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:15:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:17:19 +0000   Sun, 19 Oct 2025 17:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-663015
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                e7d4d908-64b0-4858-bf62-c6148a998433
	  Boot ID:                    74c6737e-3c93-45ee-b810-4373d2d70b9b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-2r8tf                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-663015                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-rrthg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-663015             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-663015    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-g62dn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-663015             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wz2k5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kr5fp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-663015 event: Registered Node default-k8s-diff-port-663015 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-663015 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-663015 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-663015 event: Registered Node default-k8s-diff-port-663015 in Controller
	
	
	==> dmesg <==
	[  +0.102617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027869] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.458770] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.054620] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +1.023943] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +2.047764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +4.031517] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[  +8.127172] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[ +16.382276] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	[Oct19 16:24] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9a e2 4b 82 f8 b0 de 5b c7 8c 8c 79 08 00
	
	
	==> etcd [79c3046dfcac29d78ffef04f805bf4024716c53ca40c15dca8f18dfd42988854] <==
	{"level":"warn","ts":"2025-10-19T17:16:48.059221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.074854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.082853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.092009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.100646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.109542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.116472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.124954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.133429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.141663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.149163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.157292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.164890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.172527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.187004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.194575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.202248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:48.282974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33834","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:57.501369Z","caller":"traceutil/trace.go:172","msg":"trace[139943372] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"126.124889ms","start":"2025-10-19T17:16:57.375222Z","end":"2025-10-19T17:16:57.501347Z","steps":["trace[139943372] 'process raft request'  (duration: 125.997011ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.042403Z","caller":"traceutil/trace.go:172","msg":"trace[1399571219] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"125.448371ms","start":"2025-10-19T17:16:58.916918Z","end":"2025-10-19T17:16:59.042367Z","steps":["trace[1399571219] 'process raft request'  (duration: 125.336826ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.502012Z","caller":"traceutil/trace.go:172","msg":"trace[1774317239] linearizableReadLoop","detail":"{readStateIndex:593; appliedIndex:593; }","duration":"131.644674ms","start":"2025-10-19T17:16:59.370341Z","end":"2025-10-19T17:16:59.501986Z","steps":["trace[1774317239] 'read index received'  (duration: 131.634571ms)","trace[1774317239] 'applied index is now lower than readState.Index'  (duration: 8.784µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T17:16:59.542446Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.048651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T17:16:59.542541Z","caller":"traceutil/trace.go:172","msg":"trace[1086183679] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:565; }","duration":"172.192772ms","start":"2025-10-19T17:16:59.370331Z","end":"2025-10-19T17:16:59.542524Z","steps":["trace[1086183679] 'agreement among raft nodes before linearized reading'  (duration: 131.748702ms)","trace[1086183679] 'range keys from in-memory index tree'  (duration: 40.264735ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T17:16:59.542626Z","caller":"traceutil/trace.go:172","msg":"trace[601624699] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"146.756806ms","start":"2025-10-19T17:16:59.395857Z","end":"2025-10-19T17:16:59.542613Z","steps":["trace[601624699] 'process raft request'  (duration: 146.707103ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T17:16:59.542783Z","caller":"traceutil/trace.go:172","msg":"trace[282571166] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"251.26124ms","start":"2025-10-19T17:16:59.291503Z","end":"2025-10-19T17:16:59.542764Z","steps":["trace[282571166] 'process raft request'  (duration: 210.542951ms)","trace[282571166] 'compare'  (duration: 40.364474ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:17:44 up  1:00,  0 user,  load average: 5.37, 3.64, 2.15
	Linux default-k8s-diff-port-663015 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [15343a83908f85231015b0d8768253b6b0aae7ec917d83ac88ef6e5b58711ebc] <==
	I1019 17:16:49.844752       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:49.845031       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:16:49.845218       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:49.845249       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:49.845274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:50.045316       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:50.045354       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:50.045378       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:50.139252       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:50.589041       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:50.589096       1 metrics.go:72] Registering metrics
	I1019 17:16:50.589188       1 controller.go:711] "Syncing nftables rules"
	I1019 17:17:00.047164       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:00.047311       1 main.go:301] handling current node
	I1019 17:17:10.048727       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:10.048766       1 main.go:301] handling current node
	I1019 17:17:20.045239       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:20.045297       1 main.go:301] handling current node
	I1019 17:17:30.048140       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:30.048180       1 main.go:301] handling current node
	I1019 17:17:40.052149       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:17:40.052190       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f5702f98db02fecf8ffffae08c89809549267cd099ea38ec1f43f04d2849238] <==
	I1019 17:16:48.817861       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:48.818212       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:48.819881       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:16:48.820081       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:16:48.820215       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:16:48.820256       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:16:48.820282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:16:48.820291       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:48.826138       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:48.838170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:48.847099       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:48.847138       1 policy_source.go:240] refreshing policies
	I1019 17:16:48.870775       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:49.160747       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:16:49.189719       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:16:49.210944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:16:49.220996       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:16:49.235229       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:49.306346       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.246.23"}
	I1019 17:16:49.322180       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.157.229"}
	I1019 17:16:49.720717       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:52.395445       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:52.546083       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:52.546100       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:52.596659       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [98c96714927741271a866cf42303c32a2f1bcbff5d4fcfbf3eb2a3e8d6e376c1] <==
	I1019 17:16:52.112058       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:16:52.112134       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:16:52.112141       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:52.112148       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:52.113248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:16:52.115441       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:16:52.116906       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:52.142300       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:52.142395       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:16:52.142423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:16:52.142477       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:16:52.142496       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:52.142371       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:16:52.142384       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:16:52.142364       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:16:52.142337       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:16:52.143445       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:16:52.152343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:52.157465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:52.159227       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:16:52.164471       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:16:52.177426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:52.192936       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:52.193058       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:52.193093       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cf5b36f1b400873a6f64ccde1cbf959c0adf80a4cbee8a27050d0adb93e938aa] <==
	I1019 17:16:49.664926       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:49.723283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:49.823951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:49.824111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:16:49.824242       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:49.849019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:49.849120       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:49.855743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:49.856323       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:49.856455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:49.859457       1 config.go:200] "Starting service config controller"
	I1019 17:16:49.859481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:49.859502       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:49.859507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:49.859523       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:49.859528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:49.859923       1 config.go:309] "Starting node config controller"
	I1019 17:16:49.859961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:49.959756       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:49.959769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:16:49.959773       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:49.960319       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [0198767b0edb6f90348a6cb47c20f3c0c5d712ddfcdc06a79eb89a2396dc856b] <==
	I1019 17:16:47.210525       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:16:48.752862       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:16:48.753147       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:16:48.753165       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:16:48.753176       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:16:48.813728       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:48.813758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:48.816801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:48.816846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:48.818811       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:48.818906       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:48.917270       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874565     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/caaa2764-cc2e-4a6c-a8b3-45bb63d04684-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kr5fp\" (UID: \"caaa2764-cc2e-4a6c-a8b3-45bb63d04684\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874625     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlws\" (UniqueName: \"kubernetes.io/projected/3e29db7a-65b5-4974-b566-184d80eaa717-kube-api-access-znlws\") pod \"dashboard-metrics-scraper-6ffb444bf9-wz2k5\" (UID: \"3e29db7a-65b5-4974-b566-184d80eaa717\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874643     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3e29db7a-65b5-4974-b566-184d80eaa717-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wz2k5\" (UID: \"3e29db7a-65b5-4974-b566-184d80eaa717\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5"
	Oct 19 17:16:52 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:52.874665     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lhv\" (UniqueName: \"kubernetes.io/projected/caaa2764-cc2e-4a6c-a8b3-45bb63d04684-kube-api-access-h9lhv\") pod \"kubernetes-dashboard-855c9754f9-kr5fp\" (UID: \"caaa2764-cc2e-4a6c-a8b3-45bb63d04684\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp"
	Oct 19 17:16:55 default-k8s-diff-port-663015 kubelet[711]: I1019 17:16:55.582962     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:17:01 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:01.590009     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kr5fp" podStartSLOduration=3.786112179 podStartE2EDuration="9.589986972s" podCreationTimestamp="2025-10-19 17:16:52 +0000 UTC" firstStartedPulling="2025-10-19 17:16:53.110184825 +0000 UTC m=+6.976881850" lastFinishedPulling="2025-10-19 17:16:58.914059613 +0000 UTC m=+12.780756643" observedRunningTime="2025-10-19 17:17:00.330286018 +0000 UTC m=+14.196983073" watchObservedRunningTime="2025-10-19 17:17:01.589986972 +0000 UTC m=+15.456684009"
	Oct 19 17:17:03 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:03.322761     711 scope.go:117] "RemoveContainer" containerID="80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:04.328060     711 scope.go:117] "RemoveContainer" containerID="80c72d2cb2d1b3ff54ddd4ed079aa6d16f2bebd0cb99d9dd7464a60d06e79a77"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:04.328250     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:04 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:04.328467     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:05 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:05.332896     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:05 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:05.333147     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:12 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:12.294763     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:12 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:12.294974     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:20 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:20.374408     711 scope.go:117] "RemoveContainer" containerID="90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.225030     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.392707     711 scope.go:117] "RemoveContainer" containerID="18a1535c659f9b363f7d88e14a775d4db65c71f30fad2d40f5abac26ce73a81e"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:25.393187     711 scope.go:117] "RemoveContainer" containerID="90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	Oct 19 17:17:25 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:25.393868     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:32 default-k8s-diff-port-663015 kubelet[711]: I1019 17:17:32.295388     711 scope.go:117] "RemoveContainer" containerID="90266bf26f9f357afcc2eaa3c72132271f6bad2d3b47118e66f773e0407d9502"
	Oct 19 17:17:32 default-k8s-diff-port-663015 kubelet[711]: E1019 17:17:32.295595     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wz2k5_kubernetes-dashboard(3e29db7a-65b5-4974-b566-184d80eaa717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wz2k5" podUID="3e29db7a-65b5-4974-b566-184d80eaa717"
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 17:17:39 default-k8s-diff-port-663015 systemd[1]: kubelet.service: Consumed 1.788s CPU time.
	
	
	==> kubernetes-dashboard [955e744fa1a4514fad91045a63abd0edc7a8e64dcf7069fcb10271b34fac88fe] <==
	2025/10/19 17:16:59 Using namespace: kubernetes-dashboard
	2025/10/19 17:16:59 Using in-cluster config to connect to apiserver
	2025/10/19 17:16:59 Using secret token for csrf signing
	2025/10/19 17:16:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:16:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:16:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:16:59 Generating JWE encryption key
	2025/10/19 17:16:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:16:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:16:59 Initializing JWE encryption key from synchronized object
	2025/10/19 17:16:59 Creating in-cluster Sidecar client
	2025/10/19 17:16:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:59 Serving insecurely on HTTP port: 9090
	2025/10/19 17:17:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:16:59 Starting overwatch
	
	
	==> storage-provisioner [4bc7eb843c66297e1cde0c1c9ec4523bf5b08e853c04e2abf91e040c7011df9d] <==
	I1019 17:17:20.431465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:17:20.439728       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:17:20.439781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:17:20.442212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:23.897490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:28.158592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:31.757631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:34.811513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.834285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.838814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:37.838958       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:17:37.839107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5184909-d0d7-4566-badd-0d775b85f21e", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff became leader
	I1019 17:17:37.839156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff!
	W1019 17:17:37.842036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:37.845372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:17:37.939384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-663015_4652e047-0cee-40bb-8deb-0e34af4c79ff!
	W1019 17:17:39.848712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:39.853059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:41.856539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:41.860989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:43.864566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:17:43.868651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90c70b291b7ddbd6ff065c1772c5f6c1c6e80cc77afb11310f43bb3d05243b25] <==
	I1019 17:16:49.622757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:17:19.625609       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015: exit status 2 (342.089419ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.09s)
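
For anyone triaging this failure locally, the two post-mortem probes above can be rerun by hand while the profile is still up. This is a sketch that simply reuses the exact profile and context names from this run:

	# Query apiserver state via the same Go template the harness uses (helpers_test.go:262).
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
	# List pods in any namespace whose phase is not Running (helpers_test.go:269).
	kubectl --context default-k8s-diff-port-663015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

Note that in this run the first probe printed "Running" yet exited 2, which the harness itself annotates as "(may be ok)".
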
E1019 17:19:47.449213    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/no-preload-806996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 15.55
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.82
22 TestOffline 57.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 131.56
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 7.46
48 TestAddons/StoppedEnableDisable 18.54
49 TestCertOptions 33.64
50 TestCertExpiration 217.49
52 TestForceSystemdFlag 35.18
53 TestForceSystemdEnv 31.25
55 TestKVMDriverInstallOrUpdate 1.17
59 TestErrorSpam/setup 20.54
60 TestErrorSpam/start 0.63
61 TestErrorSpam/status 0.91
62 TestErrorSpam/pause 6.65
63 TestErrorSpam/unpause 5.42
64 TestErrorSpam/stop 2.59
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 71.43
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.26
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.87
76 TestFunctional/serial/CacheCmd/cache/add_local 1.16
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 67.88
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.27
87 TestFunctional/serial/LogsFileCmd 1.28
88 TestFunctional/serial/InvalidService 3.78
90 TestFunctional/parallel/ConfigCmd 0.37
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.09
99 TestFunctional/parallel/AddonsCmd 0.13
102 TestFunctional/parallel/SSHCmd 0.56
103 TestFunctional/parallel/CpCmd 1.86
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.7
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.46
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
117 TestFunctional/parallel/ProfileCmd/profile_list 0.45
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
119 TestFunctional/parallel/MountCmd/any-port 7.87
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.21
125 TestFunctional/parallel/MountCmd/specific-port 2.03
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.48
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
139 TestFunctional/parallel/ImageCommands/ImageBuild 2.26
140 TestFunctional/parallel/ImageCommands/Setup 0.99
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
151 TestFunctional/parallel/ServiceCmd/List 1.71
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 110.72
164 TestMultiControlPlane/serial/DeployApp 5.2
165 TestMultiControlPlane/serial/PingHostFromPods 0.94
166 TestMultiControlPlane/serial/AddWorkerNode 27.23
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
169 TestMultiControlPlane/serial/CopyFile 16.48
170 TestMultiControlPlane/serial/StopSecondaryNode 19.23
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
172 TestMultiControlPlane/serial/RestartSecondaryNode 8.88
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 103.65
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.61
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
177 TestMultiControlPlane/serial/StopCluster 41.23
178 TestMultiControlPlane/serial/RestartCluster 50.18
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
180 TestMultiControlPlane/serial/AddSecondaryNode 35.84
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 40.55
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 8
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 29.06
211 TestKicCustomNetwork/use_default_bridge_network 25.38
212 TestKicExistingNetwork 25.77
213 TestKicCustomSubnet 24.46
214 TestKicStaticIP 24.97
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 49.61
219 TestMountStart/serial/StartWithMountFirst 8.89
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.26
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.29
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 61.82
231 TestMultiNode/serial/DeployApp2Nodes 3.39
232 TestMultiNode/serial/PingHostFrom2Pods 0.67
233 TestMultiNode/serial/AddNode 23.13
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.63
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.24
239 TestMultiNode/serial/RestartKeepsNodes 78.68
240 TestMultiNode/serial/DeleteNode 5.29
241 TestMultiNode/serial/StopMultiNode 28.59
242 TestMultiNode/serial/RestartMultiNode 46.04
243 TestMultiNode/serial/ValidateNameConflict 24.71
250 TestScheduledStopUnix 98.68
253 TestInsufficientStorage 10.54
254 TestRunningBinaryUpgrade 64.87
256 TestKubernetesUpgrade 305.37
257 TestMissingContainerUpgrade 63.67
264 TestPause/serial/Start 55.31
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
269 TestNoKubernetes/serial/StartWithK8s 37.56
270 TestNoKubernetes/serial/StartWithStopK8s 24.61
271 TestPause/serial/SecondStartNoReconfiguration 6.55
279 TestNetworkPlugins/group/false 3.97
281 TestNoKubernetes/serial/Start 5.07
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
286 TestNoKubernetes/serial/ProfileList 1.28
287 TestNoKubernetes/serial/Stop 12.94
288 TestNoKubernetes/serial/StartNoArgs 6.84
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
290 TestStoppedBinaryUpgrade/Setup 0.4
291 TestStoppedBinaryUpgrade/Upgrade 45.77
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
294 TestStartStop/group/old-k8s-version/serial/FirstStart 49.9
296 TestStartStop/group/no-preload/serial/FirstStart 53.48
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.3
299 TestStartStop/group/old-k8s-version/serial/Stop 16.1
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/old-k8s-version/serial/SecondStart 51.39
302 TestStartStop/group/no-preload/serial/DeployApp 7.25
304 TestStartStop/group/no-preload/serial/Stop 16.72
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 45.26
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/embed-certs/serial/FirstStart 40.72
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.45
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/embed-certs/serial/DeployApp 8.26
321 TestStartStop/group/newest-cni/serial/FirstStart 27.8
323 TestStartStop/group/embed-certs/serial/Stop 18.11
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.25
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.24
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/embed-certs/serial/SecondStart 44.47
329 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/Stop 2.42
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/newest-cni/serial/SecondStart 11
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.58
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
340 TestNetworkPlugins/group/auto/Start 44.94
341 TestNetworkPlugins/group/kindnet/Start 41.51
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestNetworkPlugins/group/calico/Start 55.91
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
349 TestNetworkPlugins/group/auto/KubeletFlags 0.28
350 TestNetworkPlugins/group/auto/NetCatPod 9.18
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
353 TestNetworkPlugins/group/auto/DNS 0.13
354 TestNetworkPlugins/group/auto/Localhost 0.09
355 TestNetworkPlugins/group/auto/HairPin 0.09
356 TestNetworkPlugins/group/custom-flannel/Start 54.12
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
359 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
360 TestNetworkPlugins/group/kindnet/DNS 0.17
361 TestNetworkPlugins/group/kindnet/Localhost 0.14
362 TestNetworkPlugins/group/kindnet/HairPin 0.12
363 TestNetworkPlugins/group/enable-default-cni/Start 67.62
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/Start 48.32
366 TestNetworkPlugins/group/calico/KubeletFlags 0.35
367 TestNetworkPlugins/group/calico/NetCatPod 58.24
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.22
370 TestNetworkPlugins/group/custom-flannel/DNS 0.16
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
373 TestNetworkPlugins/group/bridge/Start 67.25
374 TestNetworkPlugins/group/flannel/ControllerPod 6.01
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.43
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
378 TestNetworkPlugins/group/flannel/NetCatPod 9.22
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
382 TestNetworkPlugins/group/calico/DNS 0.13
383 TestNetworkPlugins/group/calico/Localhost 0.12
384 TestNetworkPlugins/group/calico/HairPin 0.11
385 TestNetworkPlugins/group/flannel/DNS 0.2
386 TestNetworkPlugins/group/flannel/Localhost 0.14
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 8.18
390 TestNetworkPlugins/group/bridge/DNS 0.11
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (15.55s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-018429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-018429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.544636071s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (15.55s)
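
The json-events subtests consume minikube's machine-readable progress stream: with -o=json, start prints one CloudEvents-style JSON object per line instead of human-readable steps. A rough local sketch for eyeballing that stream (assumes jq is available and that this minikube version carries the text under .data.message):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-018429 --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq -r '.data.message'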

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 16:20:30.529297    7228 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 16:20:30.529402    7228 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
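
The preload-exists check is a plain cache lookup: does the tarball for this Kubernetes version and runtime already sit under the minikube home? A short Go sketch following the path layout printed in the log above (the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache location shown in the log; the layout is
// taken from the log, the function is ours.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// MINIKUBE_HOME is set in these runs; an empty value yields a path
	// relative to the current directory.
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}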

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-018429
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-018429: exit status 85 (65.247042ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-018429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-018429 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:15.029128    7256 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:15.029393    7256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:15.029402    7256 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:15.029406    7256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:15.029617    7256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	W1019 16:20:15.029744    7256 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-3731/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-3731/.minikube/config/config.json: no such file or directory
	I1019 16:20:15.030248    7256 out.go:368] Setting JSON to true
	I1019 16:20:15.031150    7256 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":161,"bootTime":1760890654,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:15.031249    7256 start.go:143] virtualization: kvm guest
	I1019 16:20:15.033894    7256 out.go:99] [download-only-018429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1019 16:20:15.034061    7256 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 16:20:15.034107    7256 notify.go:221] Checking for updates...
	I1019 16:20:15.035952    7256 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:15.038073    7256 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:15.039497    7256 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:20:15.041035    7256 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:20:15.042694    7256 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:20:15.045330    7256 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:15.045579    7256 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:15.069115    7256 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:15.069185    7256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:15.538231    7256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-19 16:20:15.524771599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:15.538388    7256 docker.go:319] overlay module found
	I1019 16:20:15.540095    7256 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:15.540137    7256 start.go:309] selected driver: docker
	I1019 16:20:15.540146    7256 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:15.540244    7256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:15.607577    7256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-19 16:20:15.596259735 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:15.607847    7256 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:15.608477    7256 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 16:20:15.608692    7256 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:15.610401    7256 out.go:171] Using Docker driver with root privileges
	I1019 16:20:15.611794    7256 cni.go:84] Creating CNI manager for ""
	I1019 16:20:15.611881    7256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:15.611895    7256 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:15.611961    7256 start.go:353] cluster config:
	{Name:download-only-018429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-018429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:15.613336    7256 out.go:99] Starting "download-only-018429" primary control-plane node in "download-only-018429" cluster
	I1019 16:20:15.613361    7256 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:15.614730    7256 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:15.614780    7256 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:20:15.614888    7256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:15.632536    7256 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:15.632749    7256 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:15.632901    7256 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:15.637741    7256 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 16:20:15.637769    7256 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:15.637912    7256 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:20:15.639672    7256 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 16:20:15.639699    7256 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 16:20:15.678117    7256 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1019 16:20:15.678266    7256 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 16:20:18.665061    7256 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:29.816926    7256 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 16:20:29.817320    7256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/download-only-018429/config.json ...
	I1019 16:20:29.817349    7256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/download-only-018429/config.json: {Name:mkc587c2e20e9c8d578e5be2f13c3d46bed761aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:20:29.817505    7256 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:20:29.817672    7256 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21683-3731/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-018429 host does not exist
	  To start a cluster, run: "minikube start -p download-only-018429"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
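
Note the inversion here: "minikube logs" exits 85 because the download-only profile never created a host, and the test passes by treating that failure as the expected outcome. A hedged Go sketch of asserting an expected non-zero exit code (binary path and profile name taken from the log; the assertion shape is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// For a download-only profile the host was never created, so "logs"
	// must fail; assert the failure instead of reporting it as an error.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-018429")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("got expected exit status 85")
		return
	}
	fmt.Printf("unexpected result: err=%v\noutput:\n%s", err, out)
}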

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-018429
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-870641 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-870641 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.003679631s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 16:20:34.965932    7228 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 16:20:34.965972    7228 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3731/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-870641
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-870641: exit status 85 (64.195939ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-018429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-018429 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-018429                                                                                                                                                   │ download-only-018429 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ -o=json --download-only -p download-only-870641 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-870641 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:31.002669    7643 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:31.002924    7643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:31.002934    7643 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:31.002938    7643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:31.003174    7643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:20:31.003630    7643 out.go:368] Setting JSON to true
	I1019 16:20:31.004467    7643 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":177,"bootTime":1760890654,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:31.004555    7643 start.go:143] virtualization: kvm guest
	I1019 16:20:31.006556    7643 out.go:99] [download-only-870641] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:20:31.006705    7643 notify.go:221] Checking for updates...
	I1019 16:20:31.008101    7643 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:31.009709    7643 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:31.011186    7643 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:20:31.012465    7643 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:20:31.013841    7643 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:20:31.016423    7643 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:31.016734    7643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:31.043656    7643 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:31.043735    7643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:31.102400    7643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:31.092687302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:31.102503    7643 docker.go:319] overlay module found
	I1019 16:20:31.103974    7643 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:31.104013    7643 start.go:309] selected driver: docker
	I1019 16:20:31.104019    7643 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:31.104151    7643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:31.161855    7643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-19 16:20:31.151968716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:31.162005    7643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:31.162500    7643 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 16:20:31.162638    7643 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:31.164534    7643 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-870641 host does not exist
	  To start a cluster, run: "minikube start -p download-only-870641"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-870641
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-110482 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-110482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-110482
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1019 16:20:36.081524    7228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-444864 --alsologtostderr --binary-mirror http://127.0.0.1:37593 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-444864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-444864
--- PASS: TestBinaryMirror (0.82s)
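
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:37593 above) so kubectl is fetched from the mirror rather than dl.k8s.io. A minimal sketch of such a mirror, assuming a local directory holding the release binaries and their .sha256 files (the directory path is hypothetical):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of release binaries over HTTP so a start with
	// --binary-mirror http://127.0.0.1:37593 resolves kubectl locally.
	fs := http.FileServer(http.Dir("/tmp/k8s-mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:37593", fs))
}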

                                                
                                    
TestOffline (57.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-945920 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-945920 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (55.283262484s)
helpers_test.go:175: Cleaning up "offline-crio-945920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-945920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-945920: (2.569267685s)
--- PASS: TestOffline (57.86s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-557770
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-557770: exit status 85 (54.700438ms)

                                                
                                                
-- stdout --
	* Profile "addons-557770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-557770"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-557770
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-557770: exit status 85 (53.711992ms)

                                                
                                                
-- stdout --
	* Profile "addons-557770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-557770"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (131.56s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-557770 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-557770 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.55805755s)
--- PASS: TestAddons/Setup (131.56s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-557770 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-557770 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-557770 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-557770 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [79a54fb4-2085-4e70-bc23-ee183a0b45cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [79a54fb4-2085-4e70-bc23-ee183a0b45cd] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003836336s
addons_test.go:694: (dbg) Run:  kubectl --context addons-557770 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-557770 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-557770 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.46s)
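
The "waiting 8m0s for pods matching" step above is a poll loop over pod phase. A rough Go equivalent that shells out to kubectl (label, context, and timeout taken from the log; the loop is illustrative, not the helpers_test implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll pod phase until Running or the 8m deadline elapses.
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-557770",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox pod")
}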

                                                
                                    
TestAddons/StoppedEnableDisable (18.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-557770
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-557770: (18.282853409s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-557770
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-557770
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-557770
--- PASS: TestAddons/StoppedEnableDisable (18.54s)

                                                
                                    
TestCertOptions (33.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-639932 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-639932 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.455434227s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-639932 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-639932 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-639932 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-639932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-639932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-639932: (2.509765553s)
--- PASS: TestCertOptions (33.64s)
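
The openssl step above inspects apiserver.crt for the SANs requested via --apiserver-ips and --apiserver-names. The same check in Go with crypto/x509, assuming a local copy of the certificate (the file path is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Parse the certificate and confirm the requested SANs are present.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	want := net.ParseIP("192.168.15.15")
	found := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			found = true
			break
		}
	}
	fmt.Println("192.168.15.15 in SANs:", found)
	fmt.Println("DNS SANs:", cert.DNSNames) // should include www.google.com
}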

                                                
                                    
TestCertExpiration (217.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-132648 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-132648 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.587658138s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-132648 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.859643954s)
helpers_test.go:175: Cleaning up "cert-expiration-132648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-132648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-132648: (3.039082926s)
--- PASS: TestCertExpiration (217.49s)
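
--cert-expiration shifts the issued certificate's NotAfter (3m on first start, then 8760h on the restart above). A small Go sketch that reads a certificate and reports time remaining, again assuming a local copy of apiserver.crt:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// After the 8760h restart this should report roughly one year left.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	remaining := time.Until(cert.NotAfter)
	fmt.Printf("expires in %s (NotAfter=%s)\n", remaining.Round(time.Second), cert.NotAfter)
}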

                                                
                                    
TestForceSystemdFlag (35.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-121655 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-121655 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.366297314s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-121655 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-121655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-121655
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-121655: (2.513703622s)
--- PASS: TestForceSystemdFlag (35.18s)
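
The "cat /etc/crio/crio.conf.d/02-crio.conf" step verifies that --force-systemd produced a CRI-O drop-in selecting the systemd cgroup manager. A sketch of that assertion against a local copy of the file; the exact cgroup_manager key is an assumption based on CRI-O's TOML config format:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Check the drop-in for the systemd cgroup manager setting.
	data, err := os.ReadFile("02-crio.conf")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("systemd cgroup manager configured")
	} else {
		fmt.Println("systemd cgroup manager not found")
	}
}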

                                                
                                    
TestForceSystemdEnv (31.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-118963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-118963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.951753933s)
helpers_test.go:175: Cleaning up "force-systemd-env-118963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-118963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-118963: (3.299907627s)
--- PASS: TestForceSystemdEnv (31.25s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.17s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1019 17:11:30.196875    7228 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1019 17:11:30.197040    7228 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2446589675/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 17:11:30.235658    7228 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2446589675/001/docker-machine-driver-kvm2 version is 1.1.1
W1019 17:11:30.235728    7228 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1019 17:11:30.235873    7228 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1019 17:11:30.235931    7228 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2446589675/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.17s)
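
install.go validates the driver found on PATH, sees version 1.1.1 where 1.37.0 is wanted, and re-downloads it. A simplified Go sketch of that version gate; whether the driver binary accepts a bare "version" argument, and the output format matched here, are assumptions, not install.go's actual logic:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsUpgrade asks the driver binary for its version and compares it to
// the wanted release via a substring match (a simplification).
func needsUpgrade(driverPath, want string) (bool, error) {
	out, err := exec.Command(driverPath, "version").CombinedOutput()
	if err != nil {
		return true, err
	}
	return !strings.Contains(string(out), want), nil
}

func main() {
	up, err := needsUpgrade("/tmp/docker-machine-driver-kvm2", "1.37.0")
	fmt.Println("needs upgrade:", up, "err:", err)
}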

                                                
                                    
TestErrorSpam/setup (20.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-235302 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-235302 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-235302 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-235302 --driver=docker  --container-runtime=crio: (20.539689103s)
--- PASS: TestErrorSpam/setup (20.54s)

                                                
                                    
TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 status
--- PASS: TestErrorSpam/status (0.91s)

                                                
                                    
TestErrorSpam/pause (6.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause: exit status 80 (2.17481097s)

                                                
                                                
-- stdout --
	* Pausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause: exit status 80 (2.077214707s)

                                                
                                                
-- stdout --
	* Pausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause: exit status 80 (2.400453686s)

                                                
                                                
-- stdout --
	* Pausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.65s)
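
Note: the GUEST_PAUSE failures above all reduce to the same probe: "sudo runc list -f json" inside the node exits 1 because /run/runc, runc's default state directory when running as root, does not exist (one plausible cause is the node's container runtime keeping state elsewhere, e.g. when crun is in use). A minimal Go sketch of that probe, not minikube's actual implementation; the profile name is the one from this run:

	// probe_runc.go: a sketch of the check that fails above. It runs
	// "sudo runc list -f json" inside the node via "minikube ssh" and
	// surfaces the missing /run/runc state directory.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command strings are copied from the log; nospam-235302 is the
		// profile used by this test.
		out, err := exec.Command("minikube", "-p", "nospam-235302",
			"ssh", "sudo runc list -f json").CombinedOutput()
		if err != nil {
			// Matches the failure above: runc exits 1 with
			// "open /run/runc: no such file or directory".
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}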

                                                
                                    
TestErrorSpam/unpause (5.42s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause: exit status 80 (2.222568405s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause: exit status 80 (1.85792103s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause: exit status 80 (1.340068965s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-235302 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:26:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.42s)

                                                
                                    
TestErrorSpam/stop (2.59s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 stop: (2.404004765s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235302 --log_dir /tmp/nospam-235302 stop
--- PASS: TestErrorSpam/stop (2.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-3731/.minikube/files/etc/test/nested/copy/7228/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (71.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1019 16:27:49.095192    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.108462    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.119870    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.141304    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.182701    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.264182    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.425730    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:49.747266    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:50.389353    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-507544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m11.427324187s)
--- PASS: TestFunctional/serial/StartWithProxy (71.43s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1019 16:27:51.400864    7228 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --alsologtostderr -v=8
E1019 16:27:51.670764    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:27:54.232751    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-507544 --alsologtostderr -v=8: (6.260040837s)
functional_test.go:678: soft start took 6.261005207s for "functional-507544" cluster.
I1019 16:27:57.661379    7228 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.26s)
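
Note: a soft start is "minikube start" against a profile that is already running, which reuses the existing cluster instead of recreating it; here it took ~6.3s versus ~71s for the cold start above. A standalone sketch of that timing check, under the assumption that anything over a generous bound means the cluster was rebuilt; the threshold is ours, not the test's:

	// soft_start_check.go: re-run start on a live profile and time it.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "functional-507544", "--alsologtostderr", "-v=8")
		if err := cmd.Run(); err != nil {
			fmt.Println("soft start failed:", err)
			return
		}
		elapsed := time.Since(start)
		fmt.Printf("soft start took %s\n", elapsed)
		if elapsed > 2*time.Minute { // our bound; the log shows ~6.26s
			fmt.Println("suspiciously slow; cluster may have been recreated")
		}
	}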

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-507544 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache add registry.k8s.io/pause:3.3
E1019 16:27:59.354289    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 cache add registry.k8s.io/pause:3.3: (1.064091228s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-507544 /tmp/TestFunctionalserialCacheCmdcacheadd_local1362796488/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache add minikube-local-cache-test:functional-507544
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache delete minikube-local-cache-test:functional-507544
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-507544
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.543426ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
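
Note: the reload cycle above is: remove the image inside the node, confirm crictl no longer finds it (the expected exit 1 with "no such image"), run "cache reload", then confirm it is back. A standalone Go sketch of the same flow, with the commands copied from the log:

	// cache_reload_flow.go: drive the remove/verify/reload/verify cycle.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s", args, out)
		return err
	}

	func main() {
		mk := "out/minikube-linux-amd64"
		img := "registry.k8s.io/pause:latest"
		run(mk, "-p", "functional-507544", "ssh", "sudo crictl rmi "+img)
		// Expected to fail, as in the log, since the image was removed.
		if err := run(mk, "-p", "functional-507544", "ssh", "sudo crictl inspecti "+img); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		run(mk, "-p", "functional-507544", "cache", "reload")
		if err := run(mk, "-p", "functional-507544", "ssh", "sudo crictl inspecti "+img); err != nil {
			fmt.Println("image missing after reload:", err)
		}
	}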

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 kubectl -- --context functional-507544 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-507544 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (67.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 16:28:09.596658    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:30.078672    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:11.041250    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-507544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.882002813s)
functional_test.go:776: restart took 1m7.882127527s for "functional-507544" cluster.
I1019 16:29:11.939369    7228 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (67.88s)
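
Note: the --extra-config flag above injects an admission plugin into the API server. A sketch that verifies the setting actually landed by reading the static pod's command line; the pod name follows the standard kube-apiserver-<node> pattern and is an assumption here, not taken from the log:

	// extra_config_check.go: confirm the admission plugin reached the
	// kube-apiserver command line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-507544",
			"-n", "kube-system", "get", "pod", "kube-apiserver-functional-507544",
			"-o", "jsonpath={.spec.containers[0].command}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if strings.Contains(string(out), "NamespaceAutoProvision") {
			fmt.Println("admission plugin is enabled")
		} else {
			fmt.Println("admission plugin not found in apiserver command")
		}
	}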

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-507544 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
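
Note: the phase/status lines above come from listing the control-plane pods as JSON and requiring phase Running plus a Ready condition. A self-contained sketch of that check; the struct mirrors only the relevant slice of the Kubernetes PodList shape:

	// component_health.go: check control-plane pod phase and readiness.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-507544",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			name := p.Metadata.Labels["component"] // e.g. etcd, kube-apiserver
			fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", name, c.Status)
				}
			}
		}
	}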

                                                
                                    
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs: (1.269048889s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 logs --file /tmp/TestFunctionalserialLogsFileCmd788051256/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 logs --file /tmp/TestFunctionalserialLogsFileCmd788051256/001/logs.txt: (1.278626516s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctional/serial/InvalidService (3.78s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-507544 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-507544
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-507544: exit status 115 (348.738323ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32268 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-507544 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.78s)
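
Note: SVC_UNREACHABLE above means the NodePort was allocated but no running pod backs the service. A sketch that makes the distinction explicit by querying the service's endpoints before trying the URL:

	// endpoints_check.go: distinguish "no backend" from a network problem.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-507544",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("no ready endpoints: unreachable, as minikube reports")
			return
		}
		fmt.Println("backends:", string(out))
	}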

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 config get cpus: exit status 14 (76.214653ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 config get cpus: exit status 14 (49.05475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
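
Note: the test above relies on "config get" exiting with status 14 when a key is unset. A sketch of reading that exit code from Go, the same way a test harness would via exec.ExitError:

	// config_exit_codes.go: surface minikube's exit code for an unset key.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-507544", "config", "get", "cpus")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// 14 is what the log shows for
			// "specified key could not be found in config".
			fmt.Println("exit code:", exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("cpus is set")
	}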

                                                
                                    
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-507544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.95448ms)

                                                
                                                
-- stdout --
	* [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:29:29.623166   43587 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:29.623423   43587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:29.623432   43587 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:29.623435   43587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:29.623679   43587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:29.624281   43587 out.go:368] Setting JSON to false
	I1019 16:29:29.625272   43587 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:29.625362   43587 start.go:143] virtualization: kvm guest
	I1019 16:29:29.627415   43587 out.go:179] * [functional-507544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:29.629039   43587 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:29.629089   43587 notify.go:221] Checking for updates...
	I1019 16:29:29.631362   43587 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:29.632701   43587 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:29.634042   43587 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:29.635411   43587 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:29.636670   43587 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:29.638378   43587 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:29.638812   43587 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:29.663547   43587 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:29.663678   43587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:29.739040   43587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:29.723687468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:29.739203   43587 docker.go:319] overlay module found
	I1019 16:29:29.742883   43587 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:29.745267   43587 start.go:309] selected driver: docker
	I1019 16:29:29.745293   43587 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:29.745397   43587 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:29.747334   43587 out.go:203] 
	W1019 16:29:29.748550   43587 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:29:29.749825   43587 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
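
Note: the dry run fails validation because 250MB is below minikube's usable minimum of 1800MB, per the RSRC_INSUFFICIENT_REQ_MEMORY message above. A hypothetical standalone check mirroring that rule; the constant is taken from the log, not from minikube's source:

	// memory_floor.go: reproduce the memory validation seen above.
	package main

	import "fmt"

	const minUsableMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, like the dry run above
		fmt.Println(validateMemory(4096)) // prints <nil>: passes, matching --memory=4096 earlier
	}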

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-507544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-507544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.345331ms)

                                                
                                                
-- stdout --
	* [functional-507544] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:29:30.118385   44149 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:30.118477   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118484   44149 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:30.118488   44149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:30.118804   44149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:29:30.119280   44149 out.go:368] Setting JSON to false
	I1019 16:29:30.120292   44149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":716,"bootTime":1760890654,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:30.120370   44149 start.go:143] virtualization: kvm guest
	I1019 16:29:30.122096   44149 out.go:179] * [functional-507544] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:30.123412   44149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:30.123412   44149 notify.go:221] Checking for updates...
	I1019 16:29:30.124663   44149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:30.125918   44149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 16:29:30.127440   44149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 16:29:30.128697   44149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:30.130309   44149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:30.131905   44149 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:30.132380   44149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:30.156613   44149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:30.156707   44149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:30.214136   44149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:30.20345749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:30.214295   44149 docker.go:319] overlay module found
	I1019 16:29:30.216279   44149 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1019 16:29:30.217643   44149 start.go:309] selected driver: docker
	I1019 16:29:30.217661   44149 start.go:930] validating driver "docker" against &{Name:functional-507544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-507544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:30.217741   44149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:30.219580   44149 out.go:203] 
	W1019 16:29:30.220824   44149 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 16:29:30.221973   44149 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
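
Note: the -f flag above takes a Go template over the status struct. A sketch showing how such a template renders; the Status struct here is a stand-in whose field names match the template's references, and the command's "kublet" label is reproduced verbatim from the log:

	// status_template.go: render the same template passed to "status -f".
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Same template string as the test command above.
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		tmpl.Execute(os.Stdout, Status{
			Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
		})
	}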

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh -n functional-507544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cp functional-507544:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2156499319/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh -n functional-507544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh -n functional-507544 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7228/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /etc/test/nested/copy/7228/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7228.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /etc/ssl/certs/7228.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7228.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /usr/share/ca-certificates/7228.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/72282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /etc/ssl/certs/72282.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/72282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /usr/share/ca-certificates/72282.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
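
Note: the hashed names checked above, such as /etc/ssl/certs/51391683.0, appear to be OpenSSL subject-hash link names, so a synced certificate is reachable both by its copied file name and by its hash. A sketch computing that link name; the input path below is hypothetical, not one from this run:

	// cert_hash_names.go: derive the "<hash>.0" name for a certificate.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// "openssl x509 -subject_hash" prints the hash that becomes "<hash>.0".
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
			"-in", "/path/to/7228.pem").Output() // hypothetical path
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Printf("expected link name: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
	}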

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-507544 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "sudo systemctl is-active docker": exit status 1 (270.027448ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "sudo systemctl is-active containerd": exit status 1 (285.116747ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
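
The "inactive" stdout paired with ssh exit status 3 is expected: systemctl is-active prints the unit state and exits non-zero for anything other than active (3 for inactive), so a non-zero exit plus "inactive" is the passing outcome on a crio cluster. A sketch of the same probe; isActive is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>` inside the guest. systemctl
// prints the state and exits non-zero for anything but "active", which
// is why the log pairs exit status 3 with "inactive" on stdout.
func isActive(profile, unit string) (string, bool) {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	return state, err == nil && state == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := isActive("functional-507544", unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}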

TestFunctional/parallel/License (0.46s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "385.834229ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.281139ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "397.435397ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "54.797621ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.87s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdany-port1709824343/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760891359793440286" to /tmp/TestFunctionalparallelMountCmdany-port1709824343/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760891359793440286" to /tmp/TestFunctionalparallelMountCmdany-port1709824343/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760891359793440286" to /tmp/TestFunctionalparallelMountCmdany-port1709824343/001/test-1760891359793440286
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.414493ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 16:29:20.114220    7228 retry.go:31] will retry after 358.121247ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 test-1760891359793440286
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh cat /mount-9p/test-1760891359793440286
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-507544 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [45fd8a36-2b57-4c92-a00d-9bd735314e48] Pending
helpers_test.go:352: "busybox-mount" [45fd8a36-2b57-4c92-a00d-9bd735314e48] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [45fd8a36-2b57-4c92-a00d-9bd735314e48] Running
helpers_test.go:352: "busybox-mount" [45fd8a36-2b57-4c92-a00d-9bd735314e48] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
I1019 16:29:25.517837    7228 detect.go:223] nested VM detected
helpers_test.go:352: "busybox-mount" [45fd8a36-2b57-4c92-a00d-9bd735314e48] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003636004s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-507544 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdany-port1709824343/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.87s)
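
The any-port flow above is: start `minikube mount` in the background, retry `findmnt -T /mount-9p` until the 9p filesystem shows up (the retry.go backoff in the log), then exercise the mount from the busybox-mount pod. A sketch of the retry half, with an illustrative fixed backoff in place of the harness's randomized one:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T <dir>` inside the guest until the 9p
// mount becomes visible, mirroring the retry loop seen in the log.
func waitForMount(profile, dir string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible
		}
		time.Sleep(500 * time.Millisecond) // illustrative backoff
	}
	return fmt.Errorf("%s never became a 9p mount", dir)
}

func main() {
	if err := waitForMount("functional-507544", "/mount-9p", 10); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/mount-9p is mounted")
}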

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 41591: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-507544 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [cd6796dd-981b-4f6d-846b-748d22b3701c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [cd6796dd-981b-4f6d-846b-748d22b3701c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004412119s
I1019 16:29:29.401400    7228 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)
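
Setup waits up to 4m0s for pods labeled run=nginx-svc to report healthy. Outside the harness, kubectl's built-in waiter expresses the same loop; a sketch assuming the context name from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// `kubectl wait` blocks until the matching pods report Ready,
	// which is what the harness's 4m0s polling loop accomplishes.
	cmd := exec.Command("kubectl", "--context", "functional-507544",
		"wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
		"--timeout=4m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("pods never became ready:", err)
	}
}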

TestFunctional/parallel/MountCmd/specific-port (2.03s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdspecific-port3059600552/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.435213ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 16:29:27.988812    7228 retry.go:31] will retry after 687.78549ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdspecific-port3059600552/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "sudo umount -f /mount-9p": exit status 1 (275.519341ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-507544 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdspecific-port3059600552/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-507544 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.23.83 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
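
AccessDirect works because `minikube tunnel` assigns the LoadBalancer service a routable ingress IP, which the previous step read with the jsonpath query shown above. A sketch that polls for the IP and then fetches it, with an illustrative 30-second polling budget:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// ingressIP reads the LoadBalancer ingress IP assigned by the tunnel,
// using the same jsonpath query as the log above.
func ingressIP(ctx, svc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "svc", svc,
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	var ip string
	for i := 0; i < 30; i++ { // illustrative polling budget
		ip, _ = ingressIP("functional-507544", "nginx-svc")
		if ip != "" {
			break
		}
		time.Sleep(time.Second)
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "answered with", resp.Status)
}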

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-507544 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T" /mount1: exit status 1 (367.019009ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 16:29:30.061106    7228 retry.go:31] will retry after 340.832576ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-507544 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-507544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup997216923/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-507544 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-507544 image ls --format short --alsologtostderr:
I1019 16:35:39.132801   50079 out.go:360] Setting OutFile to fd 1 ...
I1019 16:35:39.133082   50079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.133093   50079 out.go:374] Setting ErrFile to fd 2...
I1019 16:35:39.133099   50079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.133343   50079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:35:39.133937   50079 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.134057   50079 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.134496   50079 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:35:39.153300   50079 ssh_runner.go:195] Run: systemctl --version
I1019 16:35:39.153358   50079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:35:39.173662   50079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:35:39.269988   50079 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-507544 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-507544 image ls --format table --alsologtostderr:
I1019 16:35:39.570337   50187 out.go:360] Setting OutFile to fd 1 ...
I1019 16:35:39.570589   50187 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.570597   50187 out.go:374] Setting ErrFile to fd 2...
I1019 16:35:39.570601   50187 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.570789   50187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:35:39.571375   50187 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.571465   50187 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.571843   50187 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:35:39.592130   50187 ssh_runner.go:195] Run: systemctl --version
I1019 16:35:39.592191   50187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:35:39.610812   50187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:35:39.708378   50187 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-507544 image ls --format json --alsologtostderr:
[{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-507544 image ls --format json --alsologtostderr:
I1019 16:35:39.351984   50134 out.go:360] Setting OutFile to fd 1 ...
I1019 16:35:39.352138   50134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.352148   50134 out.go:374] Setting ErrFile to fd 2...
I1019 16:35:39.352152   50134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.352355   50134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:35:39.352944   50134 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.353037   50134 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.353417   50134 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:35:39.372409   50134 ssh_runner.go:195] Run: systemctl --version
I1019 16:35:39.372456   50134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:35:39.391725   50134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:35:39.489553   50134 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
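
The JSON listing is an array of image records (id, repoDigests, repoTags, and size in bytes serialized as a string), so it is easy to consume programmatically. A sketch that decodes just the fields shown above; the struct is illustrative, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json`
// output above (only the ones read here).
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-507544",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%-12.12s %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}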

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-507544 image ls --format yaml --alsologtostderr:
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-507544 image ls --format yaml --alsologtostderr:
I1019 16:35:39.788045   50239 out.go:360] Setting OutFile to fd 1 ...
I1019 16:35:39.788368   50239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.788379   50239 out.go:374] Setting ErrFile to fd 2...
I1019 16:35:39.788383   50239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:39.788775   50239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:35:39.789457   50239 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.789567   50239 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:39.789969   50239 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:35:39.808888   50239 ssh_runner.go:195] Run: systemctl --version
I1019 16:35:39.808942   50239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:35:39.828192   50239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:35:39.925160   50239 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-507544 ssh pgrep buildkitd: exit status 1 (265.347814ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr: (1.767378121s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 107ee046cf7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-507544
--> 5cd493413d4
Successfully tagged localhost/my-image:functional-507544
5cd493413d47736d218f96a13fc5467c72c2bf710f58673304898c6181081970
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-507544 image build -t localhost/my-image:functional-507544 testdata/build --alsologtostderr:
I1019 16:35:40.275013   50401 out.go:360] Setting OutFile to fd 1 ...
I1019 16:35:40.275368   50401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:40.275379   50401 out.go:374] Setting ErrFile to fd 2...
I1019 16:35:40.275383   50401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:35:40.275576   50401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
I1019 16:35:40.276193   50401 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:40.276889   50401 config.go:182] Loaded profile config "functional-507544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:35:40.277317   50401 cli_runner.go:164] Run: docker container inspect functional-507544 --format={{.State.Status}}
I1019 16:35:40.297934   50401 ssh_runner.go:195] Run: systemctl --version
I1019 16:35:40.297999   50401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-507544
I1019 16:35:40.316462   50401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/functional-507544/id_rsa Username:docker}
I1019 16:35:40.413035   50401 build_images.go:162] Building image from path: /tmp/build.3181837211.tar
I1019 16:35:40.413131   50401 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 16:35:40.421975   50401 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3181837211.tar
I1019 16:35:40.425943   50401 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3181837211.tar: stat -c "%s %y" /var/lib/minikube/build/build.3181837211.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3181837211.tar': No such file or directory
I1019 16:35:40.425975   50401 ssh_runner.go:362] scp /tmp/build.3181837211.tar --> /var/lib/minikube/build/build.3181837211.tar (3072 bytes)
I1019 16:35:40.444337   50401 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3181837211
I1019 16:35:40.452652   50401 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3181837211 -xf /var/lib/minikube/build/build.3181837211.tar
I1019 16:35:40.461758   50401 crio.go:315] Building image: /var/lib/minikube/build/build.3181837211
I1019 16:35:40.461873   50401 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-507544 /var/lib/minikube/build/build.3181837211 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1019 16:35:41.968901   50401 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-507544 /var/lib/minikube/build/build.3181837211 --cgroup-manager=cgroupfs: (1.506995493s)
I1019 16:35:41.968978   50401 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3181837211
I1019 16:35:41.977836   50401 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3181837211.tar
I1019 16:35:41.986250   50401 build_images.go:218] Built localhost/my-image:functional-507544 from /tmp/build.3181837211.tar
I1019 16:35:41.986291   50401 build_images.go:134] succeeded building to: functional-507544
I1019 16:35:41.986298   50401 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls
E1019 16:37:49.094262    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)
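
With crio as the runtime, `image build` ships a tar of the build context into the VM and runs `sudo podman build` there, as the crio.go lines above show. A sketch that reproduces this run's three-step Dockerfile against the same CLI; the temp-dir layout and file contents are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate the tiny build context exercised by ImageBuild:
	// a FROM, a RUN, and an ADD (see the STEP 1/3..3/3 lines above).
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(dir)
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		fmt.Println(err)
		return
	}

	// minikube copies the context into the guest and, on a crio
	// cluster, builds it there with podman.
	cmd := exec.Command("minikube", "-p", "functional-507544",
		"image", "build", "-t", "localhost/my-image:functional-507544", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("build failed:", err)
	}
}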

TestFunctional/parallel/ImageCommands/Setup (0.99s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-507544
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image rm kicbase/echo-server:functional-507544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 service list: (1.708496234s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-507544 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-507544 service list -o json: (1.695011294s)
functional_test.go:1504: Took "1.695126049s" to run "out/minikube-linux-amd64 -p functional-507544 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-507544
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-507544
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-507544
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (110.72s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m49.992827598s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (110.72s)

TestMultiControlPlane/serial/DeployApp (5.2s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 kubectl -- rollout status deployment/busybox: (3.346399912s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-47tzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-h2dxx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-zptqx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-47tzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-h2dxx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-zptqx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-47tzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-h2dxx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-zptqx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.20s)
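The per-pod DNS checks above can be replayed in one loop; a sketch assuming the deployment's pods carry an app=busybox label (an assumption: the test itself lists all pods without a label filter):

  for pod in $(kubectl --context ha-475866 get pods -l app=busybox \
      -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context ha-475866 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done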

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.94s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-47tzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-47tzw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-h2dxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-h2dxx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-zptqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 kubectl -- exec busybox-7b57f96db7-zptqx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.94s)
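The pipeline in these commands recovers the host IP from busybox's nslookup output: awk 'NR==5' keeps the fifth line (where this nslookup build prints the resolved answer) and cut -d' ' -f3 takes its third space-separated field; the exact line number is tied to busybox's output layout. A standalone sketch:

  POD=busybox-7b57f96db7-47tzw
  HOST_IP=$(kubectl --context ha-475866 exec "$POD" -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-475866 exec "$POD" -- ping -c 1 "$HOST_IP"   # 192.168.49.1 in this run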

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.23s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 node add --alsologtostderr -v 5: (26.34993828s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.23s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-475866 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.48s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp testdata/cp-test.txt ha-475866:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile334709308/001/cp-test_ha-475866.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866:/home/docker/cp-test.txt ha-475866-m02:/home/docker/cp-test_ha-475866_ha-475866-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test_ha-475866_ha-475866-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866:/home/docker/cp-test.txt ha-475866-m03:/home/docker/cp-test_ha-475866_ha-475866-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test_ha-475866_ha-475866-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866:/home/docker/cp-test.txt ha-475866-m04:/home/docker/cp-test_ha-475866_ha-475866-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test_ha-475866_ha-475866-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp testdata/cp-test.txt ha-475866-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile334709308/001/cp-test_ha-475866-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt ha-475866:/home/docker/cp-test_ha-475866-m02_ha-475866.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test_ha-475866-m02_ha-475866.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt ha-475866-m03:/home/docker/cp-test_ha-475866-m02_ha-475866-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test_ha-475866-m02_ha-475866-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt ha-475866-m04:/home/docker/cp-test_ha-475866-m02_ha-475866-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test_ha-475866-m02_ha-475866-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp testdata/cp-test.txt ha-475866-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile334709308/001/cp-test_ha-475866-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m03:/home/docker/cp-test.txt ha-475866:/home/docker/cp-test_ha-475866-m03_ha-475866.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test_ha-475866-m03_ha-475866.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m03:/home/docker/cp-test.txt ha-475866-m02:/home/docker/cp-test_ha-475866-m03_ha-475866-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test_ha-475866-m03_ha-475866-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m03:/home/docker/cp-test.txt ha-475866-m04:/home/docker/cp-test_ha-475866-m03_ha-475866-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test_ha-475866-m03_ha-475866-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp testdata/cp-test.txt ha-475866-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile334709308/001/cp-test_ha-475866-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m04:/home/docker/cp-test.txt ha-475866:/home/docker/cp-test_ha-475866-m04_ha-475866.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866 "sudo cat /home/docker/cp-test_ha-475866-m04_ha-475866.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m04:/home/docker/cp-test.txt ha-475866-m02:/home/docker/cp-test_ha-475866-m04_ha-475866-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m02 "sudo cat /home/docker/cp-test_ha-475866-m04_ha-475866-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m04:/home/docker/cp-test.txt ha-475866-m03:/home/docker/cp-test_ha-475866-m04_ha-475866-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test_ha-475866-m04_ha-475866-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.48s)
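The cp commands above exercise all three transfer directions `minikube cp` supports, with node:path addressing; condensed from the log:

  out/minikube-linux-amd64 -p ha-475866 cp testdata/cp-test.txt ha-475866-m02:/home/docker/cp-test.txt            # host -> node
  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt            # node -> host
  out/minikube-linux-amd64 -p ha-475866 cp ha-475866-m02:/home/docker/cp-test.txt ha-475866-m03:/home/docker/cp-test.txt   # node -> node
  out/minikube-linux-amd64 -p ha-475866 ssh -n ha-475866-m03 "sudo cat /home/docker/cp-test.txt"                  # verify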

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.23s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 node stop m02 --alsologtostderr -v 5: (18.54082306s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5: exit status 7 (684.0184ms)

-- stdout --
	ha-475866
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475866-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475866-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475866-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1019 16:47:41.405173   75194 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:47:41.405483   75194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:47:41.405494   75194 out.go:374] Setting ErrFile to fd 2...
	I1019 16:47:41.405499   75194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:47:41.405761   75194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:47:41.405944   75194 out.go:368] Setting JSON to false
	I1019 16:47:41.405977   75194 mustload.go:66] Loading cluster: ha-475866
	I1019 16:47:41.406100   75194 notify.go:221] Checking for updates...
	I1019 16:47:41.406407   75194 config.go:182] Loaded profile config "ha-475866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:47:41.406424   75194 status.go:174] checking status of ha-475866 ...
	I1019 16:47:41.406867   75194 cli_runner.go:164] Run: docker container inspect ha-475866 --format={{.State.Status}}
	I1019 16:47:41.425976   75194 status.go:371] ha-475866 host status = "Running" (err=<nil>)
	I1019 16:47:41.426017   75194 host.go:66] Checking if "ha-475866" exists ...
	I1019 16:47:41.426334   75194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-475866
	I1019 16:47:41.444512   75194 host.go:66] Checking if "ha-475866" exists ...
	I1019 16:47:41.444883   75194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:47:41.444940   75194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-475866
	I1019 16:47:41.465777   75194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/ha-475866/id_rsa Username:docker}
	I1019 16:47:41.559793   75194 ssh_runner.go:195] Run: systemctl --version
	I1019 16:47:41.566419   75194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:47:41.578898   75194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:47:41.636055   75194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 16:47:41.625326014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:47:41.636607   75194 kubeconfig.go:125] found "ha-475866" server: "https://192.168.49.254:8443"
	I1019 16:47:41.636640   75194 api_server.go:166] Checking apiserver status ...
	I1019 16:47:41.636688   75194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:47:41.648806   75194 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1019 16:47:41.658146   75194 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:47:41.658201   75194 ssh_runner.go:195] Run: ls
	I1019 16:47:41.662038   75194 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:47:41.667852   75194 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:47:41.667875   75194 status.go:463] ha-475866 apiserver status = Running (err=<nil>)
	I1019 16:47:41.667887   75194 status.go:176] ha-475866 status: &{Name:ha-475866 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:47:41.667926   75194 status.go:174] checking status of ha-475866-m02 ...
	I1019 16:47:41.668234   75194 cli_runner.go:164] Run: docker container inspect ha-475866-m02 --format={{.State.Status}}
	I1019 16:47:41.687446   75194 status.go:371] ha-475866-m02 host status = "Stopped" (err=<nil>)
	I1019 16:47:41.687474   75194 status.go:384] host is not running, skipping remaining checks
	I1019 16:47:41.687481   75194 status.go:176] ha-475866-m02 status: &{Name:ha-475866-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:47:41.687498   75194 status.go:174] checking status of ha-475866-m03 ...
	I1019 16:47:41.687761   75194 cli_runner.go:164] Run: docker container inspect ha-475866-m03 --format={{.State.Status}}
	I1019 16:47:41.706470   75194 status.go:371] ha-475866-m03 host status = "Running" (err=<nil>)
	I1019 16:47:41.706493   75194 host.go:66] Checking if "ha-475866-m03" exists ...
	I1019 16:47:41.706779   75194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-475866-m03
	I1019 16:47:41.724167   75194 host.go:66] Checking if "ha-475866-m03" exists ...
	I1019 16:47:41.724402   75194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:47:41.724445   75194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-475866-m03
	I1019 16:47:41.742245   75194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/ha-475866-m03/id_rsa Username:docker}
	I1019 16:47:41.836481   75194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:47:41.849119   75194 kubeconfig.go:125] found "ha-475866" server: "https://192.168.49.254:8443"
	I1019 16:47:41.849150   75194 api_server.go:166] Checking apiserver status ...
	I1019 16:47:41.849194   75194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:47:41.859745   75194 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W1019 16:47:41.868212   75194 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:47:41.868257   75194 ssh_runner.go:195] Run: ls
	I1019 16:47:41.872125   75194 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:47:41.876554   75194 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:47:41.876577   75194 status.go:463] ha-475866-m03 apiserver status = Running (err=<nil>)
	I1019 16:47:41.876587   75194 status.go:176] ha-475866-m03 status: &{Name:ha-475866-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:47:41.876606   75194 status.go:174] checking status of ha-475866-m04 ...
	I1019 16:47:41.876846   75194 cli_runner.go:164] Run: docker container inspect ha-475866-m04 --format={{.State.Status}}
	I1019 16:47:41.895047   75194 status.go:371] ha-475866-m04 host status = "Running" (err=<nil>)
	I1019 16:47:41.895089   75194 host.go:66] Checking if "ha-475866-m04" exists ...
	I1019 16:47:41.895353   75194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-475866-m04
	I1019 16:47:41.913566   75194 host.go:66] Checking if "ha-475866-m04" exists ...
	I1019 16:47:41.913814   75194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:47:41.913871   75194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-475866-m04
	I1019 16:47:41.932610   75194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/ha-475866-m04/id_rsa Username:docker}
	I1019 16:47:42.027351   75194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:47:42.040861   75194 status.go:176] ha-475866-m04 status: &{Name:ha-475866-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.23s)
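Note that `status` exits non-zero (7 in this run) while any node of the profile is stopped, so scripts can gate on the exit code rather than parse the text; a sketch:

  out/minikube-linux-amd64 -p ha-475866 status >/dev/null 2>&1
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "cluster degraded: status exited $rc"   # 7 observed with ha-475866-m02 stopped
  fi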

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node start m02 --alsologtostderr -v 5
E1019 16:47:49.094797    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 node start m02 --alsologtostderr -v 5: (7.960644285s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.88s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.65s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 stop --alsologtostderr -v 5: (44.215395s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 start --wait true --alsologtostderr -v 5
E1019 16:49:18.530336    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.536806    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.548266    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.569772    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.611243    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.692708    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:18.854328    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:19.176163    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:19.818205    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:21.100238    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:23.661851    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:28.784154    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 start --wait true --alsologtostderr -v 5: (59.328660754s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.65s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node delete m03 --alsologtostderr -v 5
E1019 16:49:39.025810    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 node delete m03 --alsologtostderr -v 5: (9.772754365s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (41.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 stop --alsologtostderr -v 5
E1019 16:49:59.507441    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 stop --alsologtostderr -v 5: (41.11383819s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5: exit status 7 (111.255526ms)

-- stdout --
	ha-475866
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475866-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475866-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 16:50:28.623760   89276 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:50:28.623878   89276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:50:28.623887   89276 out.go:374] Setting ErrFile to fd 2...
	I1019 16:50:28.623893   89276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:50:28.624151   89276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:50:28.624366   89276 out.go:368] Setting JSON to false
	I1019 16:50:28.624392   89276 mustload.go:66] Loading cluster: ha-475866
	I1019 16:50:28.624501   89276 notify.go:221] Checking for updates...
	I1019 16:50:28.624869   89276 config.go:182] Loaded profile config "ha-475866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:50:28.624897   89276 status.go:174] checking status of ha-475866 ...
	I1019 16:50:28.625394   89276 cli_runner.go:164] Run: docker container inspect ha-475866 --format={{.State.Status}}
	I1019 16:50:28.646893   89276 status.go:371] ha-475866 host status = "Stopped" (err=<nil>)
	I1019 16:50:28.646916   89276 status.go:384] host is not running, skipping remaining checks
	I1019 16:50:28.646921   89276 status.go:176] ha-475866 status: &{Name:ha-475866 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:50:28.646950   89276 status.go:174] checking status of ha-475866-m02 ...
	I1019 16:50:28.647275   89276 cli_runner.go:164] Run: docker container inspect ha-475866-m02 --format={{.State.Status}}
	I1019 16:50:28.667581   89276 status.go:371] ha-475866-m02 host status = "Stopped" (err=<nil>)
	I1019 16:50:28.667604   89276 status.go:384] host is not running, skipping remaining checks
	I1019 16:50:28.667611   89276 status.go:176] ha-475866-m02 status: &{Name:ha-475866-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:50:28.667630   89276 status.go:174] checking status of ha-475866-m04 ...
	I1019 16:50:28.667880   89276 cli_runner.go:164] Run: docker container inspect ha-475866-m04 --format={{.State.Status}}
	I1019 16:50:28.686255   89276 status.go:371] ha-475866-m04 host status = "Stopped" (err=<nil>)
	I1019 16:50:28.686276   89276 status.go:384] host is not running, skipping remaining checks
	I1019 16:50:28.686282   89276 status.go:176] ha-475866-m04 status: &{Name:ha-475866-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (50.18s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 16:50:40.469345    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (49.350224091s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (50.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (35.84s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-475866 node add --control-plane --alsologtostderr -v 5: (34.938127888s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-475866 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (40.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-663592 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-663592 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.551853861s)
--- PASS: TestJSONOutput/start/Command (40.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-663592 --output=json --user=testUser
E1019 16:52:49.094540    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-663592 --output=json --user=testUser: (8.003077545s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-994732 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-994732 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.254366ms)

-- stdout --
	{"specversion":"1.0","id":"81ae6217-2dae-49a0-b6f0-81622a85e6aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-994732] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a8ea4e2-b2d8-48d1-8c71-a1e45eb08ea2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"ad43a53d-6f68-4e9b-901b-06ffc6de1e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3ac02f11-46df-44da-8ba0-bc32d6b73e9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig"}}
	{"specversion":"1.0","id":"438e4d86-1e1e-42db-830b-f43862daea6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube"}}
	{"specversion":"1.0","id":"bcbb8a92-3aba-4b3f-b68c-c1702822eb29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dd2f0583-fb65-446a-8662-9c4117c11f32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0a940b6-da92-4b66-8279-019b00f3a69d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-994732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-994732
--- PASS: TestErrorJSONOutput (0.22s)
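Each line of the --output=json stream is a CloudEvents-style object, so error events can be filtered mechanically; a jq sketch over the run above (field names taken from the stdout shown):

  out/minikube-linux-amd64 start -p json-output-error-994732 --memory=3072 \
      --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
             | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
  # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)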

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.06s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-861758 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-861758 --network=: (26.897373445s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-861758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-861758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-861758: (2.1461104s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.06s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.38s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-756731 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-756731 --network=bridge: (23.385874646s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-756731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-756731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-756731: (1.973849425s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.38s)

                                                
                                    
TestKicExistingNetwork (25.77s)

=== RUN   TestKicExistingNetwork
I1019 16:53:56.465395    7228 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1019 16:53:56.482569    7228 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1019 16:53:56.482684    7228 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1019 16:53:56.482714    7228 cli_runner.go:164] Run: docker network inspect existing-network
W1019 16:53:56.500955    7228 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1019 16:53:56.500988    7228 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1019 16:53:56.501006    7228 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1019 16:53:56.501198    7228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1019 16:53:56.519864    7228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-96cf7041f267 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ea:91:e3:37:25} reservation:<nil>}
I1019 16:53:56.520320    7228 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6060}
I1019 16:53:56.520354    7228 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1019 16:53:56.520407    7228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1019 16:53:56.580432    7228 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-453286 --network=existing-network
E1019 16:54:18.538297    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-453286 --network=existing-network: (23.631647706s)
helpers_test.go:175: Cleaning up "existing-network-453286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-453286
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-453286: (1.985771473s)
I1019 16:54:22.215524    7228 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.77s)
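For context, the pre-created network this test hands to minikube can be reproduced by hand. A minimal sketch, reusing the exact create flags logged above (192.168.58.0/24 is just the subnet this run picked after skipping the taken 192.168.49.0/24):

  docker network create --driver=bridge \
    --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true \
    --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  # minikube then attaches to the existing bridge instead of creating its own:
  out/minikube-linux-amd64 start -p existing-network-453286 --network=existing-network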

TestKicCustomSubnet (24.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-771368 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-771368 --subnet=192.168.60.0/24: (22.277424304s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-771368 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-771368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-771368
E1019 16:54:46.232878    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-771368: (2.157908706s)
--- PASS: TestKicCustomSubnet (24.46s)
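The assertion behind this test reduces to comparing the requested subnet with what Docker reports for the profile's network; roughly, using the same two commands as the log:

  out/minikube-linux-amd64 start -p custom-subnet-771368 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-771368 --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected: 192.168.60.0/24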

TestKicStaticIP (24.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-015885 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-015885 --static-ip=192.168.200.200: (22.682798843s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-015885 ip
helpers_test.go:175: Cleaning up "static-ip-015885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-015885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-015885: (2.150197595s)
--- PASS: TestKicStaticIP (24.97s)
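Same pattern for a pinned node IP: start with --static-ip, then confirm the ip subcommand echoes the requested address back (values from this run):

  out/minikube-linux-amd64 start -p static-ip-015885 --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p static-ip-015885 ip
  # expected: 192.168.200.200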

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-588903 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-588903 --driver=docker  --container-runtime=crio: (20.751687246s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-591234 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-591234 --driver=docker  --container-runtime=crio: (22.889449425s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-588903
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-591234
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-591234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-591234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-591234: (2.387647102s)
helpers_test.go:175: Cleaning up "first-588903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-588903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-588903: (2.381734047s)
--- PASS: TestMinikubeProfile (49.61s)
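The profile list -ojson calls above are what the test parses to confirm both profiles exist and that `minikube profile <name>` switched the active one. A sketch, assuming jq is available and the JSON keeps its usual valid/invalid grouping:

  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
  # first-588903
  # second-591234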

TestMountStart/serial/StartWithMountFirst (8.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-796369 --memory=3072 --mount-string /tmp/TestMountStartserial2217951044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-796369 --memory=3072 --mount-string /tmp/TestMountStartserial2217951044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.891833269s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.89s)
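The long start command boils down to: mount a host directory into the guest (host:guest via --mount-string) on a dedicated port, with no Kubernetes components (the msize/port flags tune minikube's 9p mount). A trimmed sketch with the same flags, where the host path is arbitrary:

  out/minikube-linux-amd64 start -p mount-start-1-796369 --memory=3072 \
    --mount-string /tmp/data:/minikube-host \
    --mount-port 46464 --mount-uid 0 --mount-gid 0 \
    --no-kubernetes --driver=docker --container-runtime=crio
  # the follow-up subtests verify the mount from inside the guest:
  out/minikube-linux-amd64 -p mount-start-1-796369 ssh -- ls /minikube-host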

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-796369 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.26s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-810933 --memory=3072 --mount-string /tmp/TestMountStartserial2217951044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-810933 --memory=3072 --mount-string /tmp/TestMountStartserial2217951044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.259569134s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.26s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-810933 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-796369 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-796369 --alsologtostderr -v=5: (1.732930108s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-810933 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-810933
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-810933: (1.257300358s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-810933
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-810933: (6.288083132s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-810933 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (61.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026920 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026920 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m1.337603688s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (61.82s)

TestMultiNode/serial/DeployApp2Nodes (3.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-026920 -- rollout status deployment/busybox: (1.987315906s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-2pszj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-7b76m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-2pszj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-7b76m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-2pszj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-7b76m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.39s)

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-2pszj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-2pszj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-7b76m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026920 -- exec busybox-7b57f96db7-7b76m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
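The probe above is two steps: resolve host.minikube.internal from inside a pod, then ping the address it resolved to (the Docker bridge gateway, 192.168.67.1 here). With busybox's nslookup output, line 5 should be the answer record, so awk 'NR==5' | cut -d' ' -f3 isolates the bare IP. A standalone version of the same check (pod name from this run):

  HOST_IP=$(kubectl --context multinode-026920 exec busybox-7b57f96db7-2pszj -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context multinode-026920 exec busybox-7b57f96db7-2pszj -- sh -c "ping -c 1 $HOST_IP"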

TestMultiNode/serial/AddNode (23.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026920 -v=5 --alsologtostderr
E1019 16:57:49.094280    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-026920 -v=5 --alsologtostderr: (22.483317691s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.13s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-026920 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp testdata/cp-test.txt multinode-026920:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990325724/001/cp-test_multinode-026920.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920:/home/docker/cp-test.txt multinode-026920-m02:/home/docker/cp-test_multinode-026920_multinode-026920-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test_multinode-026920_multinode-026920-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920:/home/docker/cp-test.txt multinode-026920-m03:/home/docker/cp-test_multinode-026920_multinode-026920-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test_multinode-026920_multinode-026920-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp testdata/cp-test.txt multinode-026920-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990325724/001/cp-test_multinode-026920-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m02:/home/docker/cp-test.txt multinode-026920:/home/docker/cp-test_multinode-026920-m02_multinode-026920.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test_multinode-026920-m02_multinode-026920.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m02:/home/docker/cp-test.txt multinode-026920-m03:/home/docker/cp-test_multinode-026920-m02_multinode-026920-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test_multinode-026920-m02_multinode-026920-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp testdata/cp-test.txt multinode-026920-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990325724/001/cp-test_multinode-026920-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m03:/home/docker/cp-test.txt multinode-026920:/home/docker/cp-test_multinode-026920-m03_multinode-026920.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920 "sudo cat /home/docker/cp-test_multinode-026920-m03_multinode-026920.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920-m03:/home/docker/cp-test.txt multinode-026920-m02:/home/docker/cp-test_multinode-026920-m03_multinode-026920-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test_multinode-026920-m03_multinode-026920-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.63s)
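The copy matrix above exercises every direction minikube cp supports: local-to-node, node-to-local, and node-to-node, with each copy verified by cat'ing the file over ssh. One round trip, schematically (the /tmp destination here is arbitrary):

  out/minikube-linux-amd64 -p multinode-026920 cp testdata/cp-test.txt multinode-026920:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
  out/minikube-linux-amd64 -p multinode-026920 cp multinode-026920:/home/docker/cp-test.txt multinode-026920-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-026920 ssh -n multinode-026920-m02 "sudo cat /home/docker/cp-test.txt"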

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-026920 node stop m03: (1.25621651s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026920 status: exit status 7 (500.517227ms)

-- stdout --
	multinode-026920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr: exit status 7 (495.924509ms)

-- stdout --
	multinode-026920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 16:58:12.853160  148795 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:58:12.853423  148795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:58:12.853434  148795 out.go:374] Setting ErrFile to fd 2...
	I1019 16:58:12.853438  148795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:58:12.853666  148795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 16:58:12.853828  148795 out.go:368] Setting JSON to false
	I1019 16:58:12.853853  148795 mustload.go:66] Loading cluster: multinode-026920
	I1019 16:58:12.853901  148795 notify.go:221] Checking for updates...
	I1019 16:58:12.854427  148795 config.go:182] Loaded profile config "multinode-026920": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:58:12.854449  148795 status.go:174] checking status of multinode-026920 ...
	I1019 16:58:12.854964  148795 cli_runner.go:164] Run: docker container inspect multinode-026920 --format={{.State.Status}}
	I1019 16:58:12.874373  148795 status.go:371] multinode-026920 host status = "Running" (err=<nil>)
	I1019 16:58:12.874396  148795 host.go:66] Checking if "multinode-026920" exists ...
	I1019 16:58:12.874651  148795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026920
	I1019 16:58:12.893210  148795 host.go:66] Checking if "multinode-026920" exists ...
	I1019 16:58:12.893480  148795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:58:12.893527  148795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026920
	I1019 16:58:12.912041  148795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/multinode-026920/id_rsa Username:docker}
	I1019 16:58:13.006983  148795 ssh_runner.go:195] Run: systemctl --version
	I1019 16:58:13.013927  148795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:58:13.027086  148795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:58:13.088760  148795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-19 16:58:13.078810218 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:58:13.089363  148795 kubeconfig.go:125] found "multinode-026920" server: "https://192.168.67.2:8443"
	I1019 16:58:13.089399  148795 api_server.go:166] Checking apiserver status ...
	I1019 16:58:13.089459  148795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:58:13.101648  148795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup
	W1019 16:58:13.110685  148795 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:58:13.110744  148795 ssh_runner.go:195] Run: ls
	I1019 16:58:13.114755  148795 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1019 16:58:13.119025  148795 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1019 16:58:13.119050  148795 status.go:463] multinode-026920 apiserver status = Running (err=<nil>)
	I1019 16:58:13.119074  148795 status.go:176] multinode-026920 status: &{Name:multinode-026920 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:58:13.119094  148795 status.go:174] checking status of multinode-026920-m02 ...
	I1019 16:58:13.119345  148795 cli_runner.go:164] Run: docker container inspect multinode-026920-m02 --format={{.State.Status}}
	I1019 16:58:13.137802  148795 status.go:371] multinode-026920-m02 host status = "Running" (err=<nil>)
	I1019 16:58:13.137848  148795 host.go:66] Checking if "multinode-026920-m02" exists ...
	I1019 16:58:13.138163  148795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026920-m02
	I1019 16:58:13.157088  148795 host.go:66] Checking if "multinode-026920-m02" exists ...
	I1019 16:58:13.157384  148795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:58:13.157425  148795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026920-m02
	I1019 16:58:13.175736  148795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21683-3731/.minikube/machines/multinode-026920-m02/id_rsa Username:docker}
	I1019 16:58:13.269460  148795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:58:13.282219  148795 status.go:176] multinode-026920-m02 status: &{Name:multinode-026920-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:58:13.282260  148795 status.go:174] checking status of multinode-026920-m03 ...
	I1019 16:58:13.282500  148795 cli_runner.go:164] Run: docker container inspect multinode-026920-m03 --format={{.State.Status}}
	I1019 16:58:13.300622  148795 status.go:371] multinode-026920-m03 host status = "Stopped" (err=<nil>)
	I1019 16:58:13.300643  148795 status.go:384] host is not running, skipping remaining checks
	I1019 16:58:13.300649  148795 status.go:176] multinode-026920-m03 status: &{Name:multinode-026920-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
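Worth noting: status still prints the per-node breakdown on stdout but exits 7 when a queried host is stopped, which is why the test treats the non-zero exit as expected here. A quick way to see it (profile from this run):

  out/minikube-linux-amd64 -p multinode-026920 node stop m03
  out/minikube-linux-amd64 -p multinode-026920 status; echo "status exit: $?"
  # status exit: 7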

TestMultiNode/serial/StartAfterStop (7.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-026920 node start m03 -v=5 --alsologtostderr: (6.520208324s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.24s)

TestMultiNode/serial/RestartKeepsNodes (78.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026920
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-026920
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-026920: (31.443541202s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026920 --wait=true -v=5 --alsologtostderr
E1019 16:59:18.530322    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026920 --wait=true -v=5 --alsologtostderr: (47.130114547s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026920
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.68s)

TestMultiNode/serial/DeleteNode (5.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-026920 node delete m03: (4.677391228s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)

TestMultiNode/serial/StopMultiNode (28.59s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-026920 stop: (28.410506958s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026920 status: exit status 7 (91.991342ms)

-- stdout --
	multinode-026920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr: exit status 7 (89.541062ms)

-- stdout --
	multinode-026920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 17:00:13.062866  158491 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:00:13.063132  158491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:00:13.063142  158491 out.go:374] Setting ErrFile to fd 2...
	I1019 17:00:13.063146  158491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:00:13.063349  158491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:00:13.063526  158491 out.go:368] Setting JSON to false
	I1019 17:00:13.063555  158491 mustload.go:66] Loading cluster: multinode-026920
	I1019 17:00:13.063639  158491 notify.go:221] Checking for updates...
	I1019 17:00:13.063916  158491 config.go:182] Loaded profile config "multinode-026920": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:00:13.063930  158491 status.go:174] checking status of multinode-026920 ...
	I1019 17:00:13.064342  158491 cli_runner.go:164] Run: docker container inspect multinode-026920 --format={{.State.Status}}
	I1019 17:00:13.083730  158491 status.go:371] multinode-026920 host status = "Stopped" (err=<nil>)
	I1019 17:00:13.083762  158491 status.go:384] host is not running, skipping remaining checks
	I1019 17:00:13.083784  158491 status.go:176] multinode-026920 status: &{Name:multinode-026920 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:00:13.083824  158491 status.go:174] checking status of multinode-026920-m02 ...
	I1019 17:00:13.084153  158491 cli_runner.go:164] Run: docker container inspect multinode-026920-m02 --format={{.State.Status}}
	I1019 17:00:13.103387  158491 status.go:371] multinode-026920-m02 host status = "Stopped" (err=<nil>)
	I1019 17:00:13.103438  158491 status.go:384] host is not running, skipping remaining checks
	I1019 17:00:13.103448  158491 status.go:176] multinode-026920-m02 status: &{Name:multinode-026920-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.59s)

TestMultiNode/serial/RestartMultiNode (46.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026920 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1019 17:00:52.168238    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026920 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (45.445054316s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026920 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.04s)

TestMultiNode/serial/ValidateNameConflict (24.71s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026920
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026920-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-026920-m02 --driver=docker  --container-runtime=crio: exit status 14 (67.441837ms)

-- stdout --
	* [multinode-026920-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-026920-m02' is duplicated with machine name 'multinode-026920-m02' in profile 'multinode-026920'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026920-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026920-m03 --driver=docker  --container-runtime=crio: (21.887284025s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026920
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-026920: exit status 80 (280.361377ms)

-- stdout --
	* Adding node m03 to cluster multinode-026920 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-026920-m03 already exists in multinode-026920-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-026920-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-026920-m03: (2.429042189s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.71s)

TestScheduledStopUnix (98.68s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-575331 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-575331 --memory=3072 --driver=docker  --container-runtime=crio: (22.485371748s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-575331 -n scheduled-stop-575331
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 17:08:58.879034    7228 retry.go:31] will retry after 131.381µs: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.880233    7228 retry.go:31] will retry after 162.007µs: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.881380    7228 retry.go:31] will retry after 242.448µs: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.882527    7228 retry.go:31] will retry after 496.678µs: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.883684    7228 retry.go:31] will retry after 332.988µs: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.884826    7228 retry.go:31] will retry after 1.115603ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.887034    7228 retry.go:31] will retry after 1.26264ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.889251    7228 retry.go:31] will retry after 1.357615ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.891481    7228 retry.go:31] will retry after 3.805871ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.895700    7228 retry.go:31] will retry after 3.691023ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.899946    7228 retry.go:31] will retry after 7.175648ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.908303    7228 retry.go:31] will retry after 11.295358ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.920565    7228 retry.go:31] will retry after 9.415896ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.930852    7228 retry.go:31] will retry after 25.279541ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
I1019 17:08:58.957143    7228 retry.go:31] will retry after 30.081164ms: open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/scheduled-stop-575331/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --cancel-scheduled
E1019 17:09:18.530353    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575331 -n scheduled-stop-575331
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575331
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575331
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-575331: exit status 7 (68.860853ms)

-- stdout --
	scheduled-stop-575331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575331 -n scheduled-stop-575331
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575331 -n scheduled-stop-575331: exit status 7 (69.178582ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-575331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-575331
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-575331: (4.82276282s)
--- PASS: TestScheduledStopUnix (98.68s)
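The scheduled-stop flow above, condensed: --schedule arms a background process that stops the profile later, a later --schedule call replaces any pending one (hence the "os: process already finished" notes), --cancel-scheduled disarms it, and status --format={{.TimeToStop}} exposes the countdown. The same sequence by hand:

  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --schedule 5m
  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-575331
  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --cancel-scheduled
  out/minikube-linux-amd64 stop -p scheduled-stop-575331 --schedule 15s   # fires ~15s later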

TestInsufficientStorage (10.54s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-865568 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-865568 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.019517337s)

-- stdout --
	{"specversion":"1.0","id":"d6ac7f1c-e164-4855-916c-acbd28727876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-865568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"658464fc-d61d-4ce5-88c4-eb0ef4b09f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"1829e688-0ee9-4a39-a7af-c53d13624aee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2186d6c6-cf19-45d5-af0e-594d7f1ba842","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig"}}
	{"specversion":"1.0","id":"388c2ff3-5dba-4844-932e-84d251bfdf18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube"}}
	{"specversion":"1.0","id":"f01d86e3-eefc-4ad3-96ee-8bed98f1ab59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a48df5be-f281-425d-b469-c4ee5c411286","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5832f44f-a824-4563-a247-5c028624885d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c62a449c-c4c6-4bad-98af-5e6d922d6f71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bb909a99-a5fe-4a66-ad0d-18d6aa3a8963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"00c6183b-e061-4a90-8615-c549e85eb7d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"622c46fd-c7b6-42d8-9f0b-9e39de41fb41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-865568\" primary control-plane node in \"insufficient-storage-865568\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc0f78e0-415c-4cc3-bbf7-a29f4648690a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e91e546-ad8e-4c3e-9a45-3bc50197e08b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bfe1586-f954-429c-b8bf-7a3694e69108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-865568 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-865568 --output=json --layout=cluster: exit status 7 (283.938345ms)

-- stdout --
	{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 17:10:22.944501  180279 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-865568" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-865568 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-865568 --output=json --layout=cluster: exit status 7 (284.840512ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 17:10:23.230571  180388 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-865568" does not appear in /home/jenkins/minikube-integration/21683-3731/kubeconfig
	E1019 17:10:23.241167  180388 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/insufficient-storage-865568/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-865568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-865568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-865568: (1.949428368s)
--- PASS: TestInsufficientStorage (10.54s)
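For reference, the cluster-layout status payload shown above is plain JSON and can be decoded with a few lines of Go. This is a minimal sketch that models only the fields visible in this run's output (minikube's real schema has more fields); the 507 status code deliberately mirrors HTTP 507 Insufficient Storage.

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus mirrors the fields visible in the `minikube status
// --output=json --layout=cluster` output above; this is a sketch, not
// minikube's full schema.
type ClusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
	Nodes        []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the payload from the test log above.
	raw := `{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-865568","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, "-", st.StatusDetail)
}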

                                                
                                    
TestRunningBinaryUpgrade (64.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.150682724 start -p running-upgrade-857401 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.150682724 start -p running-upgrade-857401 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.799204471s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-857401 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-857401 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.89871961s)
helpers_test.go:175: Cleaning up "running-upgrade-857401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-857401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-857401: (2.644398708s)
--- PASS: TestRunningBinaryUpgrade (64.87s)
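The passing flow above is: bring the cluster up with a previously released binary (v1.32.0), then run `start` on the same profile with the binary under test, which must adopt and upgrade the running cluster in place. A rough sketch of that two-step driver, assuming both binaries exist at the paths shown in the log (the `run` helper below is hypothetical, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", bin, args, err, out)
	}
	return nil
}

func main() {
	profile := "running-upgrade-857401"
	// Step 1: old release binary creates the cluster.
	// Step 2: the freshly built binary starts the same profile and must
	// take over the already-running cluster without recreating it.
	for _, bin := range []string{"/tmp/minikube-v1.32.0.150682724", "out/minikube-linux-amd64"} {
		if err := run(bin, "start", "-p", profile, "--memory=3072",
			"--driver=docker", "--container-runtime=crio"); err != nil {
			fmt.Println(err)
			return
		}
	}
}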

                                                
                                    
TestKubernetesUpgrade (305.37s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.951872656s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-318879
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-318879: (1.93516741s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-318879 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-318879 status --format={{.Host}}: exit status 7 (79.734926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.065716047s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-318879 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (574.882175ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-318879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-318879
	    minikube start -p kubernetes-upgrade-318879 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3188792 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-318879 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-318879 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.141025566s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-318879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-318879
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-318879: (2.461387852s)
--- PASS: TestKubernetesUpgrade (305.37s)
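The sequence above upgrades a cluster from v1.28.0 to v1.34.1 and then verifies that a downgrade attempt fails fast (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of such a version-order guard, using golang.org/x/mod/semver rather than minikube's actual implementation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// allowTransition reports whether moving an existing cluster from
// `current` to `requested` is safe: upgrades (and restarts at the same
// version) pass, downgrades are rejected. A sketch of the guard the
// test exercises, not minikube's code.
func allowTransition(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(allowTransition("v1.34.1", "v1.28.0")) // downgrade: error
	fmt.Println(allowTransition("v1.28.0", "v1.34.1")) // upgrade: <nil>
}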

                                                
                                    
TestMissingContainerUpgrade (63.67s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1309980342 start -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1309980342 start -p missing-upgrade-447724 --memory=3072 --driver=docker  --container-runtime=crio: (24.476877987s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-447724
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-447724: (1.747098058s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-447724
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-447724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.626891666s)
helpers_test.go:175: Cleaning up "missing-upgrade-447724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-447724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-447724: (2.387311584s)
--- PASS: TestMissingContainerUpgrade (63.67s)

                                                
                                    
TestPause/serial/Start (55.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111127 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-111127 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.314068418s)
--- PASS: TestPause/serial/Start (55.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (84.159204ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-212695] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
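The exit status 14 (MK_USAGE) above comes from flag validation: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A simplified sketch of that check (not minikube's actual validator):

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags rejects the conflicting combination the test
// exercises: asking for no Kubernetes while pinning a Kubernetes
// version. A simplified sketch under that one rule.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateStartFlags(true, "v1.28.0")) // usage error (exit status 14)
	fmt.Println(validateStartFlags(true, ""))        // ok
}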

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.56s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212695 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212695 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.17757939s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-212695 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.56s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.61s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.036590974s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-212695 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-212695 status -o json: exit status 2 (349.198694ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-212695","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-212695
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-212695: (2.220678824s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.61s)
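The per-profile `status -o json` payload above is flat and easy to consume; here is a minimal Go struct covering exactly the fields shown (a sketch, not minikube's exported type). Note that the non-zero exit status 2 is expected when a component reports Stopped.

package main

import (
	"encoding/json"
	"fmt"
)

// ProfileStatus mirrors the `minikube status -o json` payload shown
// above: host running, kubelet and apiserver stopped.
type ProfileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-212695","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st ProfileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}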

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111127 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-111127 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.52792132s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.55s)

                                                
                                    
TestNetworkPlugins/group/false (3.97s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-624324 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-624324 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (213.024258ms)

                                                
                                                
-- stdout --
	* [false-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:11:26.766420  196110 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:11:26.766776  196110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:26.766786  196110 out.go:374] Setting ErrFile to fd 2...
	I1019 17:11:26.766791  196110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:11:26.767337  196110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3731/.minikube/bin
	I1019 17:11:26.768127  196110 out.go:368] Setting JSON to false
	I1019 17:11:26.771969  196110 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3233,"bootTime":1760890654,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:11:26.772130  196110 start.go:143] virtualization: kvm guest
	I1019 17:11:26.774498  196110 out.go:179] * [false-624324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:11:26.775850  196110 notify.go:221] Checking for updates...
	I1019 17:11:26.778586  196110 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:11:26.779965  196110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:11:26.781459  196110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3731/kubeconfig
	I1019 17:11:26.783211  196110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3731/.minikube
	I1019 17:11:26.784379  196110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:11:26.785558  196110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:11:26.787447  196110 config.go:182] Loaded profile config "NoKubernetes-212695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1019 17:11:26.787656  196110 config.go:182] Loaded profile config "pause-111127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:11:26.787809  196110 config.go:182] Loaded profile config "running-upgrade-857401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1019 17:11:26.787937  196110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:11:26.819373  196110 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:11:26.819467  196110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:11:26.894407  196110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-19 17:11:26.882039405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:11:26.894542  196110 docker.go:319] overlay module found
	I1019 17:11:26.899677  196110 out.go:179] * Using the docker driver based on user configuration
	I1019 17:11:26.901152  196110 start.go:309] selected driver: docker
	I1019 17:11:26.901172  196110 start.go:930] validating driver "docker" against <nil>
	I1019 17:11:26.901201  196110 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:11:26.903330  196110 out.go:203] 
	W1019 17:11:26.904835  196110 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 17:11:26.909613  196110 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-624324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-624324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-624324

>>> host: /etc/nsswitch.conf:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/hosts:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/resolv.conf:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-624324

>>> host: crictl pods:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: crictl containers:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> k8s: describe netcat deployment:
error: context "false-624324" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-624324" does not exist

>>> k8s: netcat logs:
error: context "false-624324" does not exist

>>> k8s: describe coredns deployment:
error: context "false-624324" does not exist

>>> k8s: describe coredns pods:
error: context "false-624324" does not exist

>>> k8s: coredns logs:
error: context "false-624324" does not exist

>>> k8s: describe api server pod(s):
error: context "false-624324" does not exist

>>> k8s: api server logs:
error: context "false-624324" does not exist

>>> host: /etc/cni:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: ip a s:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: ip r s:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: iptables-save:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: iptables table nat:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> k8s: describe kube-proxy daemon set:
error: context "false-624324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-624324" does not exist

>>> k8s: kube-proxy logs:
error: context "false-624324" does not exist

>>> host: kubelet daemon status:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: kubelet daemon config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> k8s: kubelet logs:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-111127
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-857401
contexts:
- context:
    cluster: pause-111127
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-111127
  name: pause-111127
- context:
    cluster: running-upgrade-857401
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-857401
  name: running-upgrade-857401
current-context: running-upgrade-857401
kind: Config
users:
- name: pause-111127
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.key
- name: running-upgrade-857401
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/running-upgrade-857401/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/running-upgrade-857401/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-624324

>>> host: docker daemon status:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: docker daemon config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/docker/daemon.json:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: docker system info:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: cri-docker daemon status:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: cri-docker daemon config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: cri-dockerd version:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: containerd daemon status:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: containerd daemon config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/containerd/config.toml:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: containerd config dump:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: crio daemon status:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: crio daemon config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: /etc/crio:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

>>> host: crio config:
* Profile "false-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624324"

----------------------- debugLogs end: false-624324 [took: 3.578756238s] --------------------------------
helpers_test.go:175: Cleaning up "false-624324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-624324
--- PASS: TestNetworkPlugins/group/false (3.97s)
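The immediate exit status 14 above is again a usage check: CRI-O requires a CNI plugin, so `--cni=false` is rejected before any cluster is created (which is also why every debugLogs probe reports a missing profile). A simplified sketch of the rule, not minikube's implementation:

package main

import "fmt"

// validateCNI mirrors the check seen above: the crio runtime cannot
// run without CNI, so an explicit --cni=false is a usage error.
// A simplified sketch of the rule only.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime == "crio" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // MK_USAGE error (exit status 14)
	fmt.Println(validateCNI("crio", "auto"))  // ok
}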

                                                
                                    
TestNoKubernetes/serial/Start (5.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212695 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.073999896s)
--- PASS: TestNoKubernetes/serial/Start (5.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-212695 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-212695 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.233755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
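The non-zero exit here is the expected outcome: `systemctl is-active` exits 0 only when the unit is active, and non-zero otherwise (ssh reports status 3 for an inactive unit), which is exactly what a --no-kubernetes profile should look like. A small sketch of the same probe, assuming a `minikube` binary on PATH rather than the test's out/minikube-linux-amd64:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the same probe as the test over `minikube ssh`:
// a nil error means `systemctl is-active` exited 0 (unit active); any
// non-zero exit, such as the status 3 seen above, means inactive.
func kubeletActive(profile string) bool {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(kubeletActive("NoKubernetes-212695")) // false while kubelet is stopped
}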

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                    
TestNoKubernetes/serial/Stop (12.94s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-212695
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-212695: (12.943086712s)
--- PASS: TestNoKubernetes/serial/Stop (12.94s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212695 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212695 --driver=docker  --container-runtime=crio: (6.838364914s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-212695 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-212695 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.198248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.4s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (45.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3991405294 start -p stopped-upgrade-659566 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3991405294 start -p stopped-upgrade-659566 --memory=3072 --vm-driver=docker  --container-runtime=crio: (26.497909133s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3991405294 -p stopped-upgrade-659566 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3991405294 -p stopped-upgrade-659566 stop: (3.405612178s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 17:12:49.094872    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-659566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.862811528s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (45.77s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-659566
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (49.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.901781087s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.482844033s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-904967 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f8226db2-996c-424a-b64b-99ee92815957] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f8226db2-996c-424a-b64b-99ee92815957] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00427694s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-904967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.30s)
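The DeployApp step waits up to 8m0s for pods labeled integration-test=busybox to become healthy. A simplified polling loop in the same spirit, shelling out to kubectl (the waitForPhase helper is hypothetical; the real helpers_test.go watcher also checks readiness, not just phase):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPhase polls kubectl until every pod matching the label reports
// the wanted phase, or the deadline passes. A simplified stand-in for
// the watcher used by the test above.
func waitForPhase(kubeContext, label, phase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			ok := len(phases) > 0
			for _, p := range phases {
				if p != phase {
					ok = false
				}
			}
			if ok {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods with label %q never reached %s", label, phase)
}

func main() {
	fmt.Println(waitForPhase("old-k8s-version-904967", "integration-test=busybox", "Running", 8*time.Minute))
}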

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-904967 --alsologtostderr -v=3
E1019 17:14:18.531288    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-904967 --alsologtostderr -v=3: (16.09702644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967: exit status 7 (78.038454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-904967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-904967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.984029561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-904967 -n old-k8s-version-904967
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-806996 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fce9d63b-e499-49e5-92ea-520aaa56468e] Pending
helpers_test.go:352: "busybox" [fce9d63b-e499-49e5-92ea-520aaa56468e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fce9d63b-e499-49e5-92ea-520aaa56468e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004133668s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-806996 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.25s)
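The DeployApp steps follow the same pattern in every group: create a busybox pod from testdata, wait for it to become Ready, then exec a trivial command. A hedged sketch of that flow; the harness polls pod labels in Go, so the kubectl wait shortcut here is an equivalent, not what the test actually runs:

# Create the test pod (context name and manifest path as in the log above).
kubectl --context no-preload-806996 create -f testdata/busybox.yaml

# Wait until the pod reports Ready, matching the harness's 8m budget.
kubectl --context no-preload-806996 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m

# The test's sanity exec: print the open-file soft limit inside the container.
kubectl --context no-preload-806996 exec busybox -- /bin/sh -c "ulimit -n"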

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.72s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-806996 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-806996 --alsologtostderr -v=3: (16.717608425s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996: exit status 7 (72.575519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-806996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (45.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-806996 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.895569334s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806996 -n no-preload-806996
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9tv62" [bf42ac24-dcdc-400d-a17f-b022ff5102f1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00365006s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.72s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.720954444s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9tv62" [bf42ac24-dcdc-400d-a17f-b022ff5102f1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003995135s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-904967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-904967 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
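VerifyKubernetesImages dumps the profile's crio image store as JSON and reports anything outside the stock Kubernetes image set, which produced the "Found non-minikube image" lines above. A rough manual equivalent; the jq filter and the repoTags field name are assumptions about the JSON payload, not something the harness uses:

# List images known to the runtime inside the profile, as JSON.
out/minikube-linux-amd64 -p old-k8s-version-904967 image list --format=json

# Assumed jq filter to pull out the tag names for inspection:
out/minikube-linux-amd64 -p old-k8s-version-904967 image list --format=json \
  | jq -r '.[].repoTags[]?'   # then look for anything not under registry.k8s.io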

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.453747608s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8t886" [21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004034489s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8t886" [21d75a06-e2e2-4dc0-b5d9-58b551d6f1e7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00408799s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-806996 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-806996 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-090139 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3863b530-fafc-49ad-aaf5-39e7efa20789] Pending
helpers_test.go:352: "busybox" [3863b530-fafc-49ad-aaf5-39e7efa20789] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3863b530-fafc-49ad-aaf5-39e7efa20789] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004116346s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-090139 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.802087992s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-090139 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-090139 --alsologtostderr -v=3: (18.112429382s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bf66eee5-05b6-4586-8e99-ab43b66c547d] Pending
helpers_test.go:352: "busybox" [bf66eee5-05b6-4586-8e99-ab43b66c547d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bf66eee5-05b6-4586-8e99-ab43b66c547d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004058208s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-663015 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-663015 --alsologtostderr -v=3: (16.242130823s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139: exit status 7 (73.917967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-090139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-090139 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.08256824s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-090139 -n embed-certs-090139
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-848035 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-848035 --alsologtostderr -v=3: (2.41890435s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035: exit status 7 (68.597424ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-848035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-848035 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.636946262s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-848035 -n newest-cni-848035
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.00s)
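The newest-cni flags above are the distinctive part of this group: --network-plugin=cni with a kubeadm pod CIDR passed through --extra-config, and --wait narrowed to components that become healthy without a working pod network (the WARNING lines later in this group note that pods cannot schedule until a CNI is actually installed). Isolated from the log for readability:

# Start flags used by the newest-cni profile, taken verbatim from the runs above:
out/minikube-linux-amd64 start -p newest-cni-848035 --memory=3072 \
  --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1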

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015: exit status 7 (75.083344ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-663015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-663015 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.226350337s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-663015 -n default-k8s-diff-port-663015
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-848035 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (44.939018593s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.507899949s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.51s)
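Each TestNetworkPlugins group differs only in the --cni value (kindnet, calico, flannel, bridge, a custom manifest, or the runtime default for auto). A sketch of starting one variant and confirming its controller pods by hand; the label and namespace come from the ControllerPod step later in this log, and kubectl wait stands in for the harness's own polling:

# Start a cluster with the kindnet CNI (flags as in the run above).
out/minikube-linux-amd64 start -p kindnet-624324 --memory=3072 --wait=true \
  --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio

# Then wait for the CNI's DaemonSet pods to become Ready.
kubectl --context kindnet-624324 -n kube-system wait --for=condition=Ready \
  pod -l app=kindnet --timeout=10m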

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9d29n" [8667f19d-4c29-4376-8168-ba8ac48bde56] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002804348s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9d29n" [8667f19d-4c29-4376-8168-ba8ac48bde56] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003583619s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-090139 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-090139 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kr5fp" [caaa2764-cc2e-4a6c-a8b3-45bb63d04684] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00373095s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (55.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1019 17:17:32.170183    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.911088504s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kr5fp" [caaa2764-cc2e-4a6c-a8b3-45bb63d04684] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0042916s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-663015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-624324 "pgrep -a kubelet"
I1019 17:17:37.854989    7228 config.go:182] Loaded profile config "auto-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-brs7j" [167ef632-f37a-4859-91eb-e6413a9eef86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-brs7j" [167ef632-f37a-4859-91eb-e6413a9eef86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003759546s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)
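NetCatPod uses kubectl replace --force, which deletes and recreates the deployment if one already exists, keeping reruns against a reused cluster idempotent. The equivalent step by hand, with the same context and manifest as above:

# Recreate the netcat test deployment even if a previous run left one behind.
kubectl --context auto-624324 replace --force -f testdata/netcat-deployment.yaml

# Then wait for the pod behind it (label from the log) to become Ready.
kubectl --context auto-624324 wait --for=condition=Ready pod -l app=netcat --timeout=15m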

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-663015 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
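The DNS, Localhost, and HairPin checks above are three one-liners run inside the netcat pod: resolve the cluster DNS name, make a zero-I/O connect to localhost, and connect back to the pod's own Service name (hairpin traffic). Collected in one place, verbatim from the log:

# DNS: resolve the in-cluster API service name.
kubectl --context auto-624324 exec deployment/netcat -- nslookup kubernetes.default

# Localhost: -z makes nc connect without sending data, -w 5 caps the timeout.
kubectl --context auto-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: connect to the pod's own Service ("netcat") from inside the pod.
kubectl --context auto-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"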

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1019 17:17:49.095185    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/addons-557770/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.115294579s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-llrxf" [7193e591-629c-4c02-9ced-944fe40c7d85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004680245s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-624324 "pgrep -a kubelet"
I1019 17:17:55.714850    7228 config.go:182] Loaded profile config "kindnet-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bngb9" [d6f77a18-e1ef-42ff-aa0c-fe7af68ab762] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bngb9" [d6f77a18-e1ef-42ff-aa0c-fe7af68ab762] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004309422s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (67.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.616686124s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.62s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-d9thg" [96c4e4b3-1337-42a2-9dd0-00ad42c7f9db] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-d9thg" [96c4e4b3-1337-42a2-9dd0-00ad42c7f9db] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004034404s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.318479174s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.32s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-624324 "pgrep -a kubelet"
I1019 17:18:32.406657    7228 config.go:182] Loaded profile config "calico-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (58.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9p2xm" [f7aac437-32ea-44f2-8bd1-e2b2aabb23cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9p2xm" [f7aac437-32ea-44f2-8bd1-e2b2aabb23cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 58.004044717s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (58.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-624324 "pgrep -a kubelet"
I1019 17:18:42.918813    7228 config.go:182] Loaded profile config "custom-flannel-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n98km" [1d48f4a5-df25-46e3-a06d-04eef5472e98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n98km" [1d48f4a5-df25-46e3-a06d-04eef5472e98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004115255s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1019 17:19:12.204845    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/old-k8s-version-904967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-624324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.247597777s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-p4zgq" [c66f7f1b-7b87-4187-858a-b3e465cd2b34] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00367924s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-624324 "pgrep -a kubelet"
I1019 17:19:16.263303    7228 config.go:182] Loaded profile config "enable-default-cni-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-624324 replace --force -f testdata/netcat-deployment.yaml
I1019 17:19:16.677128    7228 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1019 17:19:16.679396    7228 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pfsxm" [9f86547b-0e6f-4558-9a7f-7aa0c5939749] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 17:19:18.530722    7228 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/functional-507544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pfsxm" [9f86547b-0e6f-4558-9a7f-7aa0c5939749] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004301607s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-624324 "pgrep -a kubelet"
I1019 17:19:21.811467    7228 config.go:182] Loaded profile config "flannel-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s4j7t" [4152137e-87e8-4758-96db-5927d969ee47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s4j7t" [4152137e-87e8-4758-96db-5927d969ee47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005009224s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-624324 "pgrep -a kubelet"
I1019 17:20:18.859051    7228 config.go:182] Loaded profile config "bridge-624324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-624324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hmf87" [40c2015c-4528-4dcc-91bd-935e641f8644] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hmf87" [40c2015c-4528-4dcc-91bd-935e641f8644] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00411557s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-624324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-624324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)
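Note: the Localhost and HairPin probes above exercise two different data paths. nc to localhost 8080 never leaves the pod, while nc to the service name netcat resolves to the ClusterIP fronting that same pod and only connects when hairpin NAT loops the traffic back to its origin. A rough Go equivalent of the hairpin probe, shelling out to kubectl the way the harness does (an illustrative sketch; only the kubectl command line is taken from the log above):

// hairpin_probe_sketch.go -- illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// From inside the netcat pod, dial the pod's own Service ("netcat")
	// on port 8080; nc -z only checks that the connection opens.
	cmd := exec.Command("kubectl", "--context", "bridge-624324",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("hairpin check failed:", err)
		return
	}
	fmt.Println("hairpin OK")
}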

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-858297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-858297
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.65s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-624324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-624324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-624324

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/hosts:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/resolv.conf:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-624324

>>> host: crictl pods:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: crictl containers:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> k8s: describe netcat deployment:
error: context "kubenet-624324" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-624324" does not exist

>>> k8s: netcat logs:
error: context "kubenet-624324" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-624324" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-624324" does not exist

>>> k8s: coredns logs:
error: context "kubenet-624324" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-624324" does not exist

>>> k8s: api server logs:
error: context "kubenet-624324" does not exist

>>> host: /etc/cni:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: ip a s:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: ip r s:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: iptables-save:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: iptables table nat:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-624324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-624324" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-624324" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: kubelet daemon config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> k8s: kubelet logs:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-212695
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-111127
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-857401
contexts:
- context:
    cluster: NoKubernetes-212695
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-212695
  name: NoKubernetes-212695
- context:
    cluster: pause-111127
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-111127
  name: pause-111127
- context:
    cluster: running-upgrade-857401
    user: running-upgrade-857401
  name: running-upgrade-857401
current-context: pause-111127
kind: Config
users:
- name: NoKubernetes-212695
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/NoKubernetes-212695/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/NoKubernetes-212695/client.key
- name: pause-111127
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.key
- name: running-upgrade-857401
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/running-upgrade-857401/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/running-upgrade-857401/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-624324

>>> host: docker daemon status:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: docker daemon config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: docker system info:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: cri-docker daemon status:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: cri-docker daemon config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: cri-dockerd version:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: containerd daemon status:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: containerd daemon config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: containerd config dump:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: crio daemon status:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: crio daemon config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: /etc/crio:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

>>> host: crio config:
* Profile "kubenet-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624324"

----------------------- debugLogs end: kubenet-624324 [took: 3.472607335s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-624324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-624324
--- SKIP: TestNetworkPlugins/group/kubenet (3.65s)
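Note: every kubectl probe in the dump above fails with "context was not found" because the kubenet profile is skipped before any cluster is created, so a kubenet-624324 entry never lands in the kubeconfig; the dumped kubectl config only knows NoKubernetes-212695, pause-111127 and running-upgrade-857401. A quick client-go sketch for listing which contexts a kubeconfig actually defines (assumes kubectl's default loading rules, i.e. $KUBECONFIG or ~/.kube/config; illustrative only):

// list_contexts_sketch.go -- illustrative only.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does and print every
	// context it defines, plus the current-context selection.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
}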

TestNetworkPlugins/group/cilium (6.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-624324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-624324" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21683-3731/.minikube/ca.crt
extensions:
- extension:
last-update: Sun, 19 Oct 2025 17:11:25 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: pause-111127
contexts:
- context:
cluster: pause-111127
extensions:
- extension:
last-update: Sun, 19 Oct 2025 17:11:25 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: pause-111127
name: pause-111127
current-context: ""
kind: Config
users:
- name: pause-111127
user:
client-certificate: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.crt
client-key: /home/jenkins/minikube-integration/21683-3731/.minikube/profiles/pause-111127/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-624324

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: docker system info:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: cri-docker daemon status:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: cri-docker daemon config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: cri-dockerd version:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: containerd daemon status:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: containerd daemon config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: containerd config dump:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: crio daemon status:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: crio daemon config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: /etc/crio:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"

>>> host: crio config:
* Profile "cilium-624324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624324"
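Each "host:" probe above is normally gathered by shelling into the node, which is impossible once the profile is gone. Against a live profile the same data could be pulled by hand, e.g. (the profile name and crio config path here are assumptions for illustration):

	minikube ssh -p pause-111127 -- sudo systemctl status crio
	minikube ssh -p pause-111127 -- sudo cat /etc/crio/crio.conf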

----------------------- debugLogs end: cilium-624324 [took: 5.617045397s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-624324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-624324
--- SKIP: TestNetworkPlugins/group/cilium (6.29s)
